vlaiv

Everything posted by vlaiv

  1. I'm with you on that one - as far as I'm concerned, the purpose of this discussion (and of many other discussions on SGL) is to learn and exchange ideas.
  2. In my view, yours is a better rendition than the APOD. The only thing I object to is the lack of brightness range. I've seen this in some images lately and I can't really tell what is going on (possibly a feature of some widely used PI script). It is as if a "cap" has been placed on brightness and the maximum allowed value is grey rather than white. Look at this comparison to get the idea of what I'm saying: left is your processing converted to grayscale and right is the Hubble team processing (most likely of the same data) that we often see online - again converted to grayscale. For some unknown reason yours simply stops midway and the maximum brightness is left at a medium gray level (for that reason the image looks rather "flat").
  3. Offset is not important for sub exposure length. Use the gain setting that you will be using for imaging. If you want to determine the best tradeoff for sub length, here are the guidelines:
     1. How much data do you want to stack and process? Shorter subs mean more data. Some algorithms like more data, others like good SNR per sub.
     2. How likely is it that you'll get a ruined sub (for whatever reason - wind, earthquake, an airplane flying through the FOV - whatever makes you discard the whole sub; satellite trails can easily be dealt with in stacking if you use some sort of sigma rejection)? Longer discarded subs mean more imaging time wasted.
     3. Differences in setup - in general you'll have a different sub length for each filter, but sometimes you will want to keep a single exposure length over a range of filters (like the same exposure for LRGB and the same for NB filters) as this simplifies calibration - only one set of darks instead of darks for each filter.
     4. What increase in noise are you prepared to tolerate? The only difference between many short subs and a few long subs (including one long sub lasting the whole imaging time) - totaling the same imaging time - is in read noise. More specifically, the difference comes down to how small read noise is compared to the other noise sources in the system. When using cooled cameras and shooting faint targets, LP noise is by far the most dominant noise source, which is why we base the decision on it, but it does not have to be (another thing to consider when calculating). If you have very dark skies and use NB filters, it can turn out that thermal noise is the largest component, so the calculation should be carried out against it instead. In fact, you want the "sum" of all time dependent noise sources (target shot noise, LP noise and dark current / thermal noise all depend on exposure length) compared to read noise - read noise is the only time independent type. Noises add like linearly independent vectors - square root of sum of squares.
This is the important bit, because it means that the total increase is small if the components differ significantly in magnitude. Here is an example: let's calculate the percentage increase if LP noise is the same as, twice, three times and five times as large as read noise. The "sum" of the noises will be sqrt(read_noise^2 + lp_noise^2), so we have the following:
     1. sqrt(read_noise^2 + (1 x read_noise)^2) = sqrt(2 * read_noise^2) = read_noise * sqrt(2) = read_noise * 1.4142..., or a 41.42% increase in total noise due to read noise
     2. sqrt(read_noise^2 + (2 x read_noise)^2) = sqrt(5 * read_noise^2) = read_noise * sqrt(5) = read_noise * 2.23607 = (2 * read_noise) * (2.23607 / 2) = (2 * read_noise) * 1.118, or an 11.8% increase (over LP noise, which is 2 * read_noise in this case)
     3. sqrt(read_noise^2 + (3 x read_noise)^2) = sqrt(10 * read_noise^2) = read_noise * sqrt(10) = read_noise * 3.162278 = (3 * read_noise) * 1.054093, or a 5.4% increase over LP noise alone (which is 3 * read_noise here)
     4. sqrt(read_noise^2 + (5 x read_noise)^2) = sqrt(26 * read_noise^2) = read_noise * 5.09902 = (5 * read_noise) * 1.0198, or a 1.98% increase over LP noise alone
From this you can see that if you opt for read noise x3 smaller than LP noise, it will be the same as having only 5.4% larger LP noise and no read noise, and if you select x5 smaller read noise, it will be as if you increased LP noise by only 1.98% (with no read noise). Most people choose either x3 or x5, but you can choose any multiplier you want, depending on how much you want it to impact the final result. The thing is, as you increase the multiplier, the gains get progressively smaller, so there is really not much point going above ~x5. OK, but how do you measure it? That is fairly easy - take any of your calibrated subs and convert it to electrons using e/ADU for your camera. A CCD will have a fixed system gain, while on CMOS it will depend on the selected gain setting.
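The four calculations above can be double checked with a few lines of Python (the read noise value here is an arbitrary stand-in - the resulting percentages do not depend on it):

```python
import math

# Arbitrary read noise in electrons (the percentages are independent of it)
read_noise = 1.7

# LP noise as a multiple of read noise, as in cases 1-4 above
for factor in [1, 2, 3, 5]:
    lp_noise = factor * read_noise
    total = math.sqrt(read_noise**2 + lp_noise**2)  # noises add in quadrature
    increase = (total / lp_noise - 1) * 100         # % increase over LP noise alone
    print(f"LP noise = {factor} x read noise -> total is {increase:.2f}% larger")
```

Running it reproduces the 41.42%, 11.80%, 5.41% and 1.98% figures from the cases above.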
Pay attention when using CMOS cameras if your camera has a bit count lower than 16 bits. In that case you need to additionally divide by 2^(16 - number_of_bits) - i.e. divide by 4 for a 14-bit camera, by 16 for a 12-bit camera and by 64 for a 10-bit camera. When you prepare your sub, just select empty background and measure the mean, or better still the median, electron value on it (median is better in case you select an odd star or a very faint object that you don't notice). This gives you the background value in electrons. The square root of this value is your LP noise. You need to increase exposure until this LP noise value is your chosen factor times larger than the read noise of your camera. Alternatively, if you want to get the exposure from a single frame - take your read noise, multiply it by the selected factor and square it; this gives you the "target" LP level. You then need to expose longer (or shorter, depending on the number you get) by a factor of "target" / "measured". Makes sense?
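As a sketch of the whole procedure in Python - every number here (read noise, e/ADU, bit depth, measured background, exposure) is a made-up value for illustration, not a recommendation:

```python
import math

read_noise = 1.7       # e-, from camera spec at the chosen gain (assumed value)
e_per_adu = 0.25       # e-/ADU at that gain (assumed value)
bits = 14              # ADC bit depth; 14-bit data is left-shifted to 16 bits
factor = 5             # we want LP noise x5 larger than read noise
measured_adu = 400.0   # median ADU of empty background in a calibrated sub
exposure = 60.0        # seconds - length of that sub

# Convert ADU to electrons, dividing by 2^(16 - bits) to undo the bit shift
background_e = measured_adu / 2**(16 - bits) * e_per_adu
lp_noise = math.sqrt(background_e)        # shot noise of the sky background

# "Target" background level at which LP noise = factor x read noise
target_e = (factor * read_noise)**2

# Scale exposure by "target" / "measured" (it can come out shorter as well)
new_exposure = exposure * target_e / background_e
print(f"LP noise {lp_noise:.2f} e-, target {target_e:.2f} e-, "
      f"suggested exposure {new_exposure:.0f} s")
```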
  4. Any such software tool is bound to be very imprecise. This is because you can't model every little detail in order to accurately calculate an optimum exposure length - and even then, there is no such thing as an optimum exposure length. Optimum implies the best possible value, and in reality the best possible value for any setup is a single exposure as long as the total imaging time. As long as we use cameras that have read noise, the optimum / ideal solution is that one - a single very long exposure. What you can do instead is accept some level of tradeoff, and the level of tradeoff you accept has much more impact on the calculated sub length than the other parameters. Some of the parameters you can select also have far too large a range of "implied" values to be of any use. For example - say you take a Bortle 4 or 5 sky. That is roughly 19.1 to 21.3 SQM - more than 2 magnitudes of difference. That is more than x6.3 in sky brightness between the two extreme points, and consequently more than x2.5 in LP noise levels. Since the exposure needed to swamp read noise scales inversely with sky flux, this translates into a sub length difference, all other conditions equal, of more than x6.3: on one side of the spectrum we have, for example, a 1 minute exposure and on the other more than 6 minutes - just from being indecisive about what exactly our sky brightness is. The best way to tackle this problem is to actually measure the values - measure your background sky flux and your read noise, and based on those two parameters alone you can select the tradeoff that you are ready to accept.
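The brightness and noise ratios quoted above can be checked directly from the SQM difference:

```python
# Sky brightness ratio between the two ends of the SQM range quoted above
sqm_bright, sqm_dark = 19.1, 21.3
mag_diff = sqm_dark - sqm_bright            # 2.2 magnitudes
flux_ratio = 10**(mag_diff / 2.5)           # Pogson scale: flux ratio
noise_ratio = flux_ratio**0.5               # LP (shot) noise scales as sqrt(flux)
print(f"x{flux_ratio:.1f} in sky brightness, x{noise_ratio:.2f} in LP noise")
```

With the full 2.2 magnitude spread this gives about x7.6 in brightness and x2.75 in LP noise - hence "more than x6.3" and "more than x2.5" for 2 magnitudes exactly.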
  5. I can't really tell from the image, but I'm starting to doubt that it is in fact a T2 thread. T2 thread is actually an M42 x 0.75 metric thread and you want it in the male variety - the thread being cut on the outside of the tube so you can simply screw the camera on with an adapter. It looks like this: This is from a video showing how to attach a DSLR to an ST80 - it also has a 1.25" focuser, but that one has an outer T2 thread. Here is the short video itself https://www.youtube.com/watch?v=UBq0yKbtsuQ This is probably your best option. There are a few other options - like moving the primary or replacing the focuser altogether (for example, you could maybe adapt this one: https://www.teleskop-express.de/shop/product_info.php/info/p7836_TS-Optics-1-25--Crayford-Newtonian-Focuser---metal---with-T2-connection.html but I suspect you would need to 3D print a base, as it looks unnecessarily tall and suited for a 6" tube, and I think there is still a risk of it being too high for imaging), but those all require investment and cutting up the scope. Using a barlow is straightforward as it moves the focal plane further out, but there are some caveats that you should be aware of:
     - You will often hear that a barlow slows down the scope, and that is true, but what is implied by it is not. F/ratio is not the measure of the speed of an astrophotographic setup. In effect, with a x2 barlow you will be working at F/8.8 rather than at F/4.4, and that is fine.
     - You will need to bin your data to get the same working resolution as without the barlow. You can bin your data after you stack it, while it is still linear, before you start processing. This will recover any SNR loss due to using the "zoom". If you increase the focal length of the telescope (using a barlow) and you increase the pixel size (using binning), you end up with the same effective setup. Research how best to bin your data with the software that you use.
     - You will need longer individual exposures compared to not using a barlow - to swamp read noise.
Total integration time can stay the same when you bin, but you need longer subs. Instead of using 240 x 1 minute, use 60 x 4 minutes (for example).
     - A barlow usually narrows the field of view, but in this case that is OK, as you'll be using only the central portion of the FOV anyway because of coma and also because of vignetting. It effectively helps you utilize more of your sensor (although to the same effect as not using one and cropping more).
     - The drawback is that it adds more weight to an already unsuitable focuser.
If you want to solve all of these issues and still practice with the newtonian design, the solution is fairly simple, provided that your star tracker can handle more weight: https://www.firstlightoptics.com/reflectors/skywatcher-explorer-130p-ds-ota.html It has an adequate 2" focuser, it has a properly sized secondary so it will illuminate an APS-C sized sensor, and you can use coma correctors with it to get a large usable field. The only drawback is that it is heavier at 4kg, and together with camera and accessories it will push your mount to its limits (I think ~5kg is the max payload for the SkyGuider Pro?). The bonus is that the above OTA will still be usable on a larger mount for wonderful images.
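Binning the stacked linear data in software, as described above, can be sketched with numpy (a minimal 2x2 average-binning example - actual stacking software will have its own, often more sophisticated, binning options):

```python
import numpy as np

def bin2x2(img):
    """Software-bin a linear image 2x2 by averaging each 2x2 block of pixels."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2  # drop a trailing odd row / column if present
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy 4x4 "image" bins down to 2x2; each output pixel averages 4 input pixels,
# which is what recovers the SNR lost to the x2 longer focal length
img = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bin2x2(img))
```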
  6. Btw, you can use barlow lens to move focus point outward or shorten the tube like in this video
  7. You should be able to use a mirrorless camera on such a scope with just a T2 ring screwed directly to the focuser tube (see if the eyepiece clamp unscrews to reveal a T2 thread). If not, there are designs for a "low profile" 1.25" focuser that you can 3D print in order to attach a DSLR to the focuser. See this as an example https://www.thingiverse.com/thing:2552565 (it is for the Celestron Astromaster so it has a different base, but you get the idea). I do have to warn you that you will run into multiple issues with that scope / focuser for imaging:
     - That focuser is not really capable of holding anything heavy and remaining straight.
     - At F/4.4 you are going to get so much coma that only a very narrow central region of an APS-C or 4/3 sensor will be usable (forget full frame).
     - There is no coma corrector for the 1.25" format.
     - You'll get serious vignetting due to the small secondary.
  8. My personal view is that there is no such thing as "sufficient" guiding. Lower RMS is always better, regardless of the sampling rate applied. It is true that at some point we enter the domain of diminishing returns, and it is good to know where this domain is, but it is not related to focal length nor to sampling rate. It is related to the other contributors to the final FWHM - namely seeing and aperture size. To answer the poll questions: for both A and B, the question does not make sense, as guide RMS and whether we deem it "sufficient" is not related to either sampling rate or focal length. In order for guide RMS to be "inconsequential" to total FWHM, it should be at least x3-x5 smaller than the largest component that contributes to FWHM. This is usually the seeing. If we convert all values to RMS, then we can see that:
     2" FWHM seeing ~= 0.85" RMS
     3" FWHM seeing ~= 1.275" RMS
     80mm aperture Airy disk ~= 0.6" RMS
From this we can see that even for a small scope of 80mm in poor seeing of 3", we need to guide at 0.25" - 0.425" RMS for guiding's contribution to make minimal difference. As for the rule of thumb that guiding RMS should be half the pixel scale - well, that is just looking at things "in reverse". It is OK for giving some peace of mind, but if one wishes to get the best out of their gear, then it is not sufficient. It is the sampling rate that should be adjusted to match the achieved FWHM and not the other way around (we can't really impact FWHM, but we can match pixel scale to FWHM, either by using reducers, or binning, or other means ...).
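The FWHM-to-RMS conversions above assume a Gaussian star profile, for which FWHM = 2 * sqrt(2 * ln 2) * RMS ≈ 2.355 * RMS; a quick check in Python:

```python
import math

FWHM_TO_RMS = 1 / (2 * math.sqrt(2 * math.log(2)))  # ~1 / 2.355 for a Gaussian

for seeing_fwhm in (2.0, 3.0):
    seeing_rms = seeing_fwhm * FWHM_TO_RMS
    # guide RMS x3-x5 smaller than seeing RMS for guiding to be near-inconsequential
    lo, hi = seeing_rms / 5, seeing_rms / 3
    print(f'{seeing_fwhm}" FWHM ~= {seeing_rms:.3f}" RMS; '
          f'guide at {lo:.2f}" - {hi:.2f}" RMS')
```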
  9. If you struggle with the stars - do give StarNet v2 a go. I think it will be useful for this image. I tried it when I processed Crescent nebula imaged by Rodd and it works a treat.
  10. I think that you maybe made the target too bright. If you push curves that much, fine detail starts to be lost (like in an overexposed image). It is a very delicate object - maybe try to make it a bit more "airy" and not as "glowing".
  11. I think there is some sort of issue with the ASI294MC on certain gain settings that makes the sensor non linear - it saturates at values lower than 65535 (around 50000, I believe). This might be the cause of the issue with auto flats that I encountered (well, I just helped diagnose and mitigate it - I did not actually use the software or own the camera).
  12. Excellent design. Not sure about the 0.965" eyepiece though - 1.25" should be doable in that size? And how's that cat for a size standard?
  13. I don't know - I'm poking in the dark really. I do remember some people having issues with flat calibration when using automatic flats in APT. Once they switched to "lights" mode for taking their flats and manually selected the exposure length, flats started working. In general, I distrust any sort of automatic feature unless I'm 100% sure that I know how it works. Some software can automatically calibrate subs (like SharpCap, for example), and given that I mistook out of focus stars for inverted dust particles, I assumed (without any real foundation - except for that automatic flat issue, which might not have anything to do with APT itself but rather with non linear / clipping camera data, and the knowledge that some software has this feature) that something weird was going on. In light of those features being out of focus stars, I'd say it is highly unlikely to be the software's fault, but I still think that ASCOM drivers should be used over native ones, just in case. That is actually very easy to figure out - a set of darks and bias is taken with each driver and compared. If they are the same, then either driver is OK, but if the mean ADU values or noise levels differ, I'd investigate further or simply use ASCOM as it is known to work well.
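The dark comparison could be scripted along these lines - here synthetic frames stand in for real darks from the two drivers (the mean level and noise figures are invented; in practice you would load the actual frames instead):

```python
import numpy as np

def stack_stats(subs):
    """Mean ADU level and average per-pixel temporal noise of a stack of darks."""
    stack = np.stack(subs).astype(np.float64)
    return stack.mean(), stack.std(axis=0).mean()

# Synthetic stand-ins: 10 darks per driver, ~500 ADU offset, ~8 ADU read noise
rng = np.random.default_rng(0)
native_darks = [rng.normal(500, 8, (100, 100)) for _ in range(10)]
ascom_darks = [rng.normal(500, 8, (100, 100)) for _ in range(10)]

# If mean ADU or noise level differs noticeably, the two drivers are not equivalent
for name, subs in [("native", native_darks), ("ASCOM", ascom_darks)]:
    mean, noise = stack_stats(subs)
    print(f"{name}: mean {mean:.1f} ADU, per-pixel noise {noise:.2f} ADU")
```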
  14. For planetary use, there's a big chance you will end up with a kaleidoscope. Maybe limit its use to EEVA and wide field if you decide to get one.
  15. Magnification depends solely on the focal length of the eyepiece (for a given telescope). If you have an eyepiece that looks like this: with a small eye lens and 6mm of focal length, and you have something that looks like this: a 6.5mm Morpheus with a huge eye lens - both will provide the same magnification. A planet like Jupiter will be the same / similar size in both. Sharpness and definition of the view do not depend on how big the eye lens is. Sometimes the sharpest and best views come from those small eyepieces (monocentric, orthoscopic and so on). What does change with a large eye lens is observing comfort and apparent field of view. You simply need a larger eye lens to get good eye relief and a wider apparent field of view. The rest of the eyepiece in most cases serves the same purpose. An eyepiece is large if it has more elements inside and larger elements inside. These usually serve to get a wide apparent field of view and comfortable viewing. Simple eyepiece designs have 3-4 lenses in them, while complex ones can have 10 or more. More is not always better, and seasoned observers often value less glass rather than more.
  16. Maybe do just that last test with NINA and ASCOM drivers and then move on to selling the camera? Just in case?
  17. Good point - I hadn't thought about that. I've seen a similar dust particle in a flat produced with a flat panel and concluded it was the same thing, but you are quite right - it is much more likely to be an out of focus star, especially if the sky flats are taken later in the evening.
  18. @BrendanC Something is wrong with your files - namely with the master dark. Did you include a master dark from a 50s or 150s exposure instead of shooting dedicated darks with the same exposure settings as the sky subs? The master dark you supplied has a higher mean ADU value (by far) than the sky flats you supplied. Actually, never mind that - I just saw that you used the same exposure length for sky and panel flats, and the master flat dark seems to match that exposure length, so it can be used for both. I'm now going to do it like that. There is quite a bit of overcorrection in the result I got: However, I'm concerned about the quality of the data. Look at this: This is a stack of your sky flats without any alteration (no calibration of any kind - just a regular average) - and that file has the dust shadows inverted! This cannot happen normally and it must be a consequence of some sort of data manipulation. Here is what you need to do - ditch APT and switch to NINA, repeat the procedure, and this time pay attention so that you:
     1. Take sky flats of proper exposure length (the ones you posted are way too short - their value is 300 out of 65000, or about 1/200, while your regular flats are about 20000 out of 65000, or about 1/3, which is good)
     2. Take panel flats again like you did
     3. Take associated darks for the sky flats (matching gain, offset, exposure length)
     4. Take associated darks for the panel flats (again matching gain, offset, exposure length)
Take 10 of each again - but this time post all the files, don't create masters to save bandwidth (just to make sure nothing out of the ordinary happens when stacking them). It is important that you shoot with ASCOM drivers using NINA instead of APT - I'm afraid that APT does something weird like "automatic calibration" when taking images - something you don't know of and that messes up your entire calibration procedure afterwards.
  19. That is really not much. My monthly quota is 200GB, but my wife and I recently took a liking to Netflix and, it being a streaming service, it eats up some of it (I did limit it to 2mbps as soon as I realized how much data a full HD stream can gobble up in no time). It's downloaded now, and I'll report my findings when I have a look at it.
  20. Is it already that time of the year?
  21. Yes, but I'm actually saying that you should really perform the pixel math and see what you get. You should stack the "sky flat darks" and "sky flats" using average, and similarly the "panel flat darks" and "panel flats" - just plain average stacking without alignment. Siril should be able to do this; if not, ImageJ certainly will (and I can do it for you - or explain how to do it - whichever you prefer). Then you should subtract the respective darks from the "lights" (or rather flats - we run a risk of terminology confusion here). In the end, you should divide the two resulting images. If you want an "equation", it looks like this: (sky_flats_stack - sky_flat_darks_stack) / (panel_flats_stack - panel_flat_darks_stack) Then take the resulting image and see if it has any gradients - it should be rather "flat" (here flat meaning no gradients).
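The "equation" above translates directly into numpy. Here the four sets of frames are synthetic stand-ins with invented levels (in practice you would load the real frames instead):

```python
import numpy as np

def average_stack(subs):
    """Plain average of already-aligned frames - no registration, no normalization."""
    return np.stack(subs).astype(np.float64).mean(axis=0)

# Synthetic stand-ins: 10 frames per set, flats ~20000 / ~19500 ADU, darks ~500 ADU
rng = np.random.default_rng(1)
shape = (50, 50)
sky_flats = [rng.normal(20000, 50, shape) for _ in range(10)]
sky_flat_darks = [rng.normal(500, 5, shape) for _ in range(10)]
panel_flats = [rng.normal(19500, 50, shape) for _ in range(10)]
panel_flat_darks = [rng.normal(500, 5, shape) for _ in range(10)]

# (sky_flats_stack - sky_flat_darks_stack) / (panel_flats_stack - panel_flat_darks_stack)
ratio = (average_stack(sky_flats) - average_stack(sky_flat_darks)) / \
        (average_stack(panel_flats) - average_stack(panel_flat_darks))

# A "flat" result (no gradients) means the two flat sources agree up to a constant
print(f"ratio mean {ratio.mean():.4f}, pixel-to-pixel spread {ratio.std():.4f}")
```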
  22. I guess that you can follow @markse68 's advice and stack in Siril without registration. Calibrate with matching darks and use only a regular average - no normalization is needed for this to work. Alternatively, if you don't have a bunch of files (and you should not - just a dozen of each is enough), you can upload them somewhere and I'll stack them for you in ImageJ (which is another option for stacking subs without registration).
  23. Exquisite processing. I'm usually not very fond of stars in RASA setups (and attempts to control their size) - but these look just right. In fact - whole image looks just right.
  24. Don't get me wrong - I'm not saying that what you did is wrong - quite the opposite - it should be part of a standard workflow: refocus on filter change. Having a motor focuser makes life much easier in this regard, but what you've done is also a time saver - recording an offset for each filter and then applying it on filter change (this is what software with a motor focuser also does). What I'm saying is that the color issue is not that much of an issue - in the sense that it can still be fixed. People shooting with OSC can't refocus for each color, and they can employ a simple trick to remove blue bloat in software. Other than that, I've listed some methods to deal with it and generally improve sharpness.