Everything posted by vlaiv

  1. That won't work well. The ASI224 is a very small sensor with small pixels, and both scopes have long focal lengths. To get an idea of what the FOV will look like, use the astronomy.tools FOV calculator. Yes - that is the heart of the Orion Nebula M42 - only the Trapezium can be seen. The FOV is tiny. Even adding something like a x0.5 reducer will not make things much better. However, look at that scope with a DSLR on the same target: now we are getting somewhere. Not to mention that 1000mm FL with the 3.75µm pixels of the ASI224 gives 0.77"/px - so that needs to be binned x2, which results in an image size of only about 650x490px. The sketch after this post shows roughly where the FOV numbers come from.
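A rough sketch of what a FOV calculator like astronomy.tools computes, for anyone who wants to check their own combination. The sensor dimensions below (ASI224 ~4.9 x 3.7 mm, a typical APS-C DSLR ~23.5 x 15.6 mm) are my assumptions for illustration:

```python
import math

def fov_arcmin(sensor_mm, focal_length_mm):
    """Field of view along one sensor axis, in arc minutes."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm))) * 60

# Assumed sensor sizes: ASI224 ~4.9 x 3.7 mm, APS-C DSLR ~23.5 x 15.6 mm
for name, w, h in [("ASI224", 4.9, 3.7), ("APS-C DSLR", 23.5, 15.6)]:
    print(f"{name} @ 1000mm: {fov_arcmin(w, 1000):.0f}' x {fov_arcmin(h, 1000):.0f}'")
# ASI224 gives roughly 17' x 13' - tiny compared to ~81' x 54' for the DSLR
```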
  2. I think we can easily come up with a x100 figure. You say the light gathering surface is ~x6.5, although Wiki gives ~x5.61 (25.4m² vs 4.525m²). Then there is the part of the spectrum each covers - HST covers something like a 1.5µm range (UV/Vis/NIR) while JWST covers about a 28µm range (0.6µm-28.3µm to be precise). There ya go: 5.61 * 28 / 1.5 = 104.72 - roughly x100 more "photon gathering capability".
  3. I'm all for a dedicated astro camera, and I think you should get one eventually, but a DSLR is a really good way to get started and learn the basics. I started out with a web camera, then moved on to a planetary camera with which I did some DSO imaging. I was hooked fairly quickly, so I moved on to cooled cameras. There is not much to it, really.

The speed of a telescope is determined by two factors - one being aperture and the other being pixel size - or more precisely, how much sky is covered by a single pixel. We say that a 4" F/4 scope is faster than a 4" F/5 scope simply because the latter has a longer focal length and "more zoom" (although zoom is not the correct term here) - meaning each pixel covers less sky than with the shorter-FL F/4 scope.

Binning pixels just joins a group of them to act as a single pixel. Think of buckets left out in the rain: a single bucket gathers some rain water, but if you put 4 buckets next to each other (2x2) and measure the amount they collect together, it will be x4 as much. In the same way, binned pixels gather x4 more photons (in the case of a 2x2 bin) than a single pixel, making them more sensitive to light. The downside of binning is that you lose resolution. In astro imaging, especially with DSOs, resolution is in fact limited by other factors - seeing, mount performance and so on. For this reason you can safely afford to bin even 4x4 with your scope (it being a very long focal length scope).

As a beginner, you want to be somewhere in the range of 1.5-2"/px (each pixel covering 1.5 to 2 arc seconds of sky). There is a simple formula for how much sky is covered by each pixel: 206.3 * pixel_size / focal_length (pixel size in µm, focal length in mm) - see the sketch after this post. A modern DSLR with 6000x4000px resolution usually has pixels around 3.8µm in size. With the above formula for your scope you get 206.3 * 3.8 / 2436 = ~0.32"/px. This is an extremely high sampling rate (a low number means a small patch of sky per pixel) and you need to bin considerably to get into the range above. With bin 4x4 you'll be at 0.32 x 4 = ~1.28"/px. Still lower than I would like, but it will give you OK images, at an image size of roughly 1500x1000px (a bit larger than the ASI224's ~1300x980). If you add some sort of reducer you'll get a better working resolution, but these tend to be expensive. Maybe this one: https://www.teleskop-express.de/shop/product_info.php/info/p8932_TS-Optics-Optics-2--CCD-Reducer-0-67x-for-RC---flatfield-telescopes-ab-F-8.html
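The sampling-rate formula above, as a minimal sketch you can run against your own scope/camera combination (the numbers are the ones quoted in the post):

```python
def sampling_rate(pixel_size_um, focal_length_mm, binning=1):
    """Sky coverage of one (possibly binned) pixel, in arc seconds."""
    return 206.3 * pixel_size_um * binning / focal_length_mm

print(sampling_rate(3.8, 2436))      # ~0.32 "/px - heavily oversampled
print(sampling_rate(3.8, 2436, 4))   # ~1.29 "/px after 4x4 binning
```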
  4. While purchasing suitable equipment is a good way to go about it, why not start with what you have? Get a second hand DSLR and use your Classical Cassegrain. Is it the 6" or 8" model? Maybe you already have a DSLR? These scopes have a large corrected field, and although they are "slow" at F/12, they can be very nice imaging instruments if handled properly. Most people worry about their "photographic speed", but there is an easy way to make them "faster". Say you purchase a 6000x4000px DSLR camera. If you bin your data x2 and accept 3000x2000 images, you effectively have an F/6 scope. If you bin x3 and accept 2000x1333px images, you effectively have an F/4 scope (see the sketch below).
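A minimal sketch of the "binning makes a slow scope effectively faster" argument: binning b x b makes each super-pixel b times larger, which in "/px terms is equivalent to shooting at focal_length / b, i.e. f-ratio / b:

```python
def effective_f_ratio(f_ratio, binning):
    """Effective photographic speed after software binning."""
    return f_ratio / binning

for b in (1, 2, 3):
    print(f"bin {b}x{b}: F/{effective_f_ratio(12, b):.0f}, "
          f"image {6000 // b}x{4000 // b}px")
# bin 1x1: F/12, 6000x4000 / bin 2x2: F/6, 3000x2000 / bin 3x3: F/4, 2000x1333
```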
  5. Excellent image. I like the colors the way they are (except for the red halo on most stars - but that is a feature of the lens - there is still some CA left in there even at F/8).
  6. I know, I know - funny thing to have a first light of, but I finally moved to a new house two weeks ago and this was the first time I was out observing. It's not due to weather - I simply could not find the time, or was too exhausted from moving to get myself out. The house is still not finished (terrace stairs, insulation, and obsy yet to be done) so I have a bunch of building material scattered all over the yard. Some of it is for the observatory, but in any case - not really a safe environment to move around in at night.

Expectations were high. I did 99% of my astronomy (visual and imaging) from ~SQM 18-18.5 skies in a large(ish) city. Now I'm in the countryside at 250m above sea level (the city was officially 79m); according to lightpollutionmap.info it is SQM 20.84, so at least 2 magnitudes of improvement, if not more. It was not an overly late session - I started setting up at 8pm (something I would never do in the city due to all the lights around - I'd usually wait past midnight to do any sort of observing) and by 9pm I figured the 8" dob was properly cooled so I could start.

I sat in the dark for maybe 10-15 minutes before starting, and my first impression was not good. The sky is still not black - it is blue/gray. Maybe a bit more blue and a bit less gray than in the city. That orange glow is not present, but it is far less dark than I expected. I have several light domes scattered over the horizon - all the major cities nearby provide one - but from 30° up it is much better. Naked eye showed the Milky Way, but it was far from distinct; major contours could be recognized. The Summer Triangle / Cygnus was directly overhead and the start of the Great Rift was clearly seen. M31 alternated between a direct and averted vision object. I was not lost in stars - I could still easily make out the major constellations. I did not think of trying to determine NELM.

At the eyepiece, things changed. I could easily find targets and see them - no problem. I did not pay great attention to each object - I was jumping around trying to quickly assess what could be seen and how easily. Objects that I saw: M2, M15, M71 - all observed with the 28mm ES68°. M2 and M15 were just smudges - I did not try higher magnifications to start resolving stars. M71 had a distinct shape. Found NGC7331 without too much trouble, still with the 28mm ES68° - a very nice oval blob. M27 - football shaped; I observed it with the 28mm and 16mm ES68. Vega - now, I don't usually observe bright stars, but I really enjoyed seeing razor sharp diffraction spikes on such a bright star. I only turned the telescope there to check collimation - I transported the scope just a week ago and hadn't checked collimation in quite a long time - all was good and the sight was rather pleasing. Double/Double - ES82° 11mm - not what we would call a clean split - the stars had a bit of "fuzz" due to seeing and they were "kissing".

Then I realized that Jupiter was very prominent in the southern sky, and I turned the scope on it. My first reaction was: "Why is this image so over exposed?" Jupiter was sooo bright that I really could not see any features on it at all - it was just a sort of white/yellowish blob. After a few moments, when my night vision went away, I was presented with one of the best Jupiter views to date. Seeing has been very good in these parts lately (so I've heard from other observers) and indeed - it was not perfect tonight, but there were moments of exceptional clarity. I quickly switched to the 5.5mm ES62° (am I an ES fanboy?), in part to further bring down the brightness. I enjoyed the planet for several minutes. Saturn was behind a tree and I could not be bothered to move further down the yard to find a better spot (building material is still scattered about and I did not want to risk it).

I decided to sit in darkness again to regain some night vision. The Pleiades started rising in the east. Saw a few meteors. M31/M32/M110 - all easily seen, better than in any single session from the city. The first dust lane was a direct vision feature, the second averted vision. NGC206 was seen as well (not the stars - only a slight brightening where I expected it to be). M33 - a first for me. Easily seen both in the finder and in the scope - a featureless round blob, but definitely there - direct vision. M81/M82 - well, I really did not expect to see these. They were at only ~25° Alt in the direction of a major city, and hence in one of those light domes. Clearly seen, oval and elongated shapes - better than any time from the city. By the end of the session I spent some time just cruising the MW around Cassiopeia. The ES68 28mm has some AMD - not sure if I noticed that before.

I have mixed feelings about all of this. I saw many objects with ease that I did not expect to, yet I was somewhat underwhelmed by the look of the sky at the start of the session.
  7. Check your Bayer matrix settings in AS!3. It should be RGGB for the ASI224. If you get that wrong, it will produce a funny-colored image. You should end up with a green rather than blue image, and then in Registax hit RGB balance - it should produce nice looking colors. A 12 minute video is too long for Jupiter, as the planet rotates. If you need videos that long, look into WinJupos and derotation of videos. Another way to improve FPS is to select a smaller ROI - like 640x480 or even 320x200. The planet is small enough to fit even a very small ROI, but you'll lose the moons that way, so choose which option you like best. Using a x3 barlow on that scope will give you the optimum sampling rate for the ASI224, so consider adding one in the future.
  8. Look up one of these: https://www.amazon.com/Facon-Surface-Interior-Indicator-Motorhome/dp/B0751K8Y6F/ref=psdc_11439431011_t2_B08S2T5WCW?th=1 or similar. You just need to fashion some way of attaching them to the telescope.
  9. Indeed. That calculator does not give very good results, so don't rely on it. For example, 0.7"/px is going to be oversampling for 99.99% of the amateur community (and by a factor of x2 for 90% of people) - yet it will say it is "OK".
  10. It's probably me, but I don't see any square looking stars. Here - this is 400% zoom: do any of these stars look square to you? Also, I'm really not sure that 6.5"/px is "seriously undersampling", especially with a camera lens. An F/4 lens with 200mm focal length has 50mm of aperture, and the Airy disk of a perfect 50mm aperture is 5.13" across on its own. Next, this is a lens - look at this: a 6.4µm pixel size is equivalent to 1000/6.4 = ~156 LPMM, and this lens has MTF below 80% even on axis at just 30+ LPMM. The sketch below shows the Airy disk figure.
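A minimal sketch of where the 5.13" figure comes from; the 510nm wavelength is my assumption, chosen because 2.44 * 510nm / 50mm reproduces the quoted value:

```python
import math

def airy_disk_arcsec(aperture_mm, wavelength_nm=510):
    """Angular diameter (to first minimum) of the Airy disk, in arc seconds."""
    rad = 2.44 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(rad) * 3600

print(f'{airy_disk_arcsec(50):.2f}"')   # ~5.13" for a perfect 50mm aperture
```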
  11. Way too low a frame count for Jupiter. You want to capture x10 as many frames and stack only the best 5-10%. Use USB 3.0 and an ROI (say 640x480). Your camera is capable of 300FPS at these resolutions, and with 5ms exposures you should get at least 180-190FPS. Four minutes of recording then gives 4 * 60 * 180 = 43200 frames total. Use gain 334 with your camera, and aim for F/11.3 with the ASI462MC (2.9µm pixel size) - which in your case means losing the barlow altogether, or maybe finding a model that gives x1.2 at most (i.e. just the barlow element mounted very close to the sensor). Sub duration needs to be around 5ms or so if you expect to freeze the seeing. Look up tips on optimizing planetary views on Google and apply the same to imaging (especially thermal management - avoid houses and large bodies of water, wait for the scope to cool down, and so on). The sketch below shows where these numbers come from.
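A sketch of the F/11.3 and 43200 figures. The critical f-ratio formula F = 2 * pixel_size / wavelength and the 510nm wavelength are my assumptions; they approximately reproduce the numbers in the post:

```python
def optimal_f_ratio(pixel_um, wavelength_um=0.51):
    """Critical-sampling f-ratio for a given pixel size and wavelength."""
    return 2 * pixel_um / wavelength_um

def total_frames(minutes, fps):
    return minutes * 60 * fps

print(f"F/{optimal_f_ratio(2.9):.1f}")  # ~F/11.4, close to the quoted F/11.3
print(total_frames(4, 180))             # 43200 frames in 4 minutes at 180FPS
```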
  12. Out of interest, what does each of these do? I'm not a PI user, so I don't even know what sort of tools are available, but I'm rather interested in learning what they do.
  13. If you read https://pixinsight.com/tutorials/PCC/ you will see that it is all about photometric filters and "absolute white balance" (whatever that means - the term is really nonsense if you ask me; what we see as white is always a perceptual thing, and depends on environmental factors and our adaptation). I completely understand the part about photometric filters, conversion between one filter set and another, and all of that. However, that has nothing to do with the actual color displayed on the screen. Even the explanation of how PCC works has a section dedicated to "The Meaning of Color in Astronomy" - or rather the color index of stars. That is all quite true; however, there is a rather simple question that certainly has a definitive answer: given some color produced on the screen, and given light from the Andromeda galaxy (or rather a single-color uniform patch of it) of the same intensity as the light coming from the computer screen, side by side - would you say these two colors are the same, or different?
  14. I was hoping the tilted heads would be a dead giveaway - for some reason those skewed star halos make me want to tilt my head to make them "straight".
  15. Here is a simple test you can do as a start - just to see how good your processing workflow is. Since you have a calibrated monitor, that simplifies things quite a bit. Provided your monitor is sRGB calibrated (6500K white point and 2.2 gamma - or the actual sRGB gamma curve), do the following: find any sort of colorful image online and display it on your monitor. Take your camera with a lens (this is the tricky part, as people often don't have a lens or a way to mount one - a short-FL guide scope works, for example) and photograph that colorful image off your monitor in a dark room. After processing, look at both images side by side - the original and your own. They should match in color.

You can "spice things up a bit" and use a few short exposures which you stack later - just to simulate astronomical images and the very high dynamic range we get after stacking. After stretching your data you should again get the same (or very similar) colors.

As a first step you can actually try the following: find a colorful image and display it on your screen, then take a photo of it with your mobile phone and compare the two. You'll be surprised how much color matching is done in the phone. If a smart phone can do this, I wonder why we could not do the same in AP.

Sometimes I feel very similar - I'm afraid that if I'm too technical, people will just dismiss it as nonsense (and it probably is, to them).
  16. The problem with photometric color calibration is twofold:

1. It's not implemented properly (PixInsight), or is implemented only partially (Siril).
2. Stellar sources occupy a rather narrow part of the visible gamut; they can be used to correct for atmospheric influence, but won't produce very good results for complete calibration.

In fact, the way Siril implements it is adequate for atmospheric correction if you already have color calibrated data to begin with.

Another problem is that we need a paradigm shift in the way we process our images. The classic RGB model most people think in when processing images is suited for content creation, but not for accurate color reproduction. Most software with color management features is geared towards the following goal: make colors be perceived the same by the content creator and the content consumer. In other words, if I "design" something on a computer under one set of lighting conditions and the image is then printed and hung under different lighting in a living room - or perhaps an object is dyed and put in a kitchen - we want our perception of that color to be the same.

In astrophotography, if we want correct colors, we need to think in terms of the physics of light (something most people are not very keen on doing) - and ask what color we would see if we had that starlight in a box next to our computer. We want the color on the computer screen to match the color of the starlight from the box.
  17. Not only that - I can also tell what the "right" colors are. The problem is that a software solution alone won't produce the best results. You also need the "hardware" part to get correct color: the camera needs to be calibrated to produce accurate colors (similarly to how displays need calibrating), and that requires shooting some sort of color chart under a calibrated source. I'm planning to write a small utility to help with that (generating different calibration targets and reading results off the raw data), but it is a matter of available time (which I hope to have a bit more of in the near future). A rough sketch of the fitting step is below.
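A minimal sketch of the camera-calibration idea described above: fit a 3x3 color correction matrix by least squares from shots of a color chart. The patch values below are entirely made up for illustration; in practice camera_rgb would be raw linear RGB read off the chart patches and reference_rgb the known linear values of those patches:

```python
import numpy as np

camera_rgb    = np.array([[0.40, 0.30, 0.20],
                          [0.10, 0.50, 0.30],
                          [0.20, 0.20, 0.60],
                          [0.80, 0.70, 0.60]])   # N patches x 3, illustrative
reference_rgb = np.array([[0.45, 0.25, 0.15],
                          [0.05, 0.55, 0.25],
                          [0.15, 0.15, 0.70],
                          [0.85, 0.72, 0.55]])   # known patch values, illustrative

# Least-squares fit of M such that reference ≈ camera @ M
M, *_ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)

calibrated = camera_rgb @ M   # apply the CCM to linear camera data
print(np.round(M, 3))
```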
  18. That is sort of "normal". The difference between the galaxy signal and the sky signal (even the darkest skies have some signal) is rather small. The point of stretching is to differentiate the two, but it can easily happen that a simple stretch will not do it (see the sketch below).
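A minimal sketch of why this happens and the usual fix: subtract the sky pedestal before stretching, so the stretch spends its dynamic range on the faint signal rather than the sky. The asinh stretch and the numbers are illustrative assumptions on my part:

```python
import numpy as np

def stretch(img, sky_level, softening=0.01):
    x = np.clip(img - sky_level, 0, None)   # remove the sky pedestal first
    x = x / x.max()                         # normalize to [0, 1]
    return np.arcsinh(x / softening) / np.arcsinh(1 / softening)

img = np.array([0.100, 0.101, 0.105, 0.150])  # sky ~0.100, faint galaxy above it
print(np.round(stretch(img, sky_level=0.100), 3))
# [0.    0.273 0.566 1.   ] - faint signal now well separated from the sky
```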
  19. Maybe do some wavelet sharpening on those? Also, align the RGB channels to correct some of the atmospheric dispersion. Even a tiny bit of sharpening can pull out detail nicely (this was done on an 8 bit image - I'm sure results with the 16 bit original will be much better). A rough sketch of the idea is below.
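A minimal sketch of wavelet-style sharpening of the kind Registax does: decompose the image into difference-of-Gaussian detail layers and add the fine layers back with extra weight. Layer sigmas and boost factors here are illustrative assumptions, not Registax's actual values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_sharpen(img, sigmas=(1, 2, 4), boosts=(1.8, 1.3, 1.0)):
    residual = img.astype(float)
    out = np.zeros_like(residual)
    for sigma, boost in zip(sigmas, boosts):
        blurred = gaussian_filter(residual, sigma)
        out += boost * (residual - blurred)   # boosted detail layer
        residual = blurred
    return out + residual                     # add back the smooth base

img = np.random.rand(64, 64)                  # stand-in for a planetary frame
sharp = wavelet_sharpen(img)                  # boosts=1 everywhere reproduces img
```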
  20. Not sure how much of a budget you have for a barlow, but I think the Baader VIP would be worth considering. It is a modular barlow of exceptional quality, made for imaging (VIP stands for Visual and Photo work). You can vary the magnification factor by changing the distance to the sensor - see the sketch below.
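A sketch of why magnification varies with spacing on any barlow: for a negative lens of focal length f, magnification grows roughly as M = 1 + d / |f|, where d is the lens-to-sensor distance. The 65mm focal length used here is an illustrative assumption, not the VIP's actual specification:

```python
def barlow_magnification(distance_mm, barlow_fl_mm=65):
    """Approximate barlow magnification as a function of spacing."""
    return 1 + distance_mm / barlow_fl_mm

for d in (65, 100, 130):
    print(f"{d}mm spacing -> x{barlow_magnification(d):.1f}")
# 65mm -> x2.0, 100mm -> x2.5, 130mm -> x3.0
```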
  21. If you want a portable mount that will be as precise as the HEQ5, maybe check out this one: https://www.firstlightoptics.com/ioptron-mounts/ioptron-cem26-center-balanced-equatorial-goto-mount.html It has a step resolution close to the HEQ5's - 0.17", the HEQ5 having around 0.14" and the EQ5 twice that at about 0.28". It comes already "belt modded" - or rather, it has a belt transmission, which smooths things out. It has built-in WiFi so you can connect your laptop to it wirelessly, and a USB port so you don't need a special cable. It has a spring loaded worm gear, which reduces backlash. It is very lightweight compared to the other two mounts (EQ5 and HEQ5) and has a very good payload at 12kg (a bit more than the EQ5).
  22. I think this is as good as it gets. F/12.6 is a bit high for a 2.4µm pixel size - at 510nm the optimum F/ratio is about F/9.4, and 9.4 / 12.6 = ~0.75, or 75%. If you take your image and downsample it to 75% of the posted size, it is very, very sharp. For example, this is a piece of it resampled: Or this: I'd say that it is as good as it gets.
  23. I see it now - it is a really excellent image!