Everything posted by vlaiv

  1. As has been mentioned - you'll need ASCOM and EQMod installed on your laptop and you'll need a cable to connect the mount to your laptop. You can buy a ready-made cable or DIY one yourself (not very hard to do - just connect some wires to connectors).
  2. You have a guiding setup, but you are not guiding. What you are doing with your finder scope and guide camera to polar align with SharpCap is OK and you should continue to do so. However, after you do that and before you start imaging - you should run guiding software - usually PHD2 - and let it command the mount by examining images from the camera and making corrections - that is what guiding is. What you are doing now is just camera-assisted polar alignment. There are several videos on YouTube that explain how to autoguide with PHD2 or similar guiding software. You should check them out, and if you have any further questions, just ask here - I'm sure people will be happy to help you.
  3. What do you mean by that? Did you mean that you both guided and used the SharpCap polar alignment routine, or did you mean by any chance that you "guided via the SharpCap polar alignment routine"? I ask because there is an issue with your guiding - subs appear as if they are not guided at all. You have a problem with the RA axis - most likely periodic error. This is normal with budget mounts (and some more expensive mounts) - and that is the reason people guide. If you are indeed guiding - how are you guiding? What software are you using, what is your guide scope / OAG and guide camera, and what is your RMS guide error in arc seconds?
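If you only have the RMS figure in pixels, converting it to arc seconds is simple arithmetic - a quick sketch (the 4.5 µm guide camera and 180 mm finder-guider focal length below are hypothetical example values, not from the post):

```python
# Convert a guide RMS figure from pixels to arcseconds.
# Pixel scale ("/px) = 206.265 * pixel size (um) / focal length (mm).
def guide_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

rms_px = 0.35                                 # hypothetical RMS reported in pixels
rms_arcsec = rms_px * guide_scale(4.5, 180.0) # ~1.8" on this hypothetical setup
```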
  4. Shifted flats. If you stack your subs without flats - you'll see dark patches in the places where you now see this artifact. When flats shift - flat correction creates a "bevel / emboss" effect in place of the dark patches: Same effect - only produced on dust shadow circles. The question is - why did your setup move between lights and flats? Do you have something loose in the optical train - like a loose focuser? If you put your scope horizontal to take flats - gravity might be causing a loose component to shift. Even if you think you have everything tight - if you use a 2" nose piece with a compression ring - that could tilt (this is why a threaded connection is preferred for astrophotography). Other than that - it could be a processing issue - for example applying flats after stacking instead of before stacking (subs get shifted in the process of registration when stars are aligned). Those blemishes that you linked on a separate single sub could again be related to dust - but closer to the sensor (maybe on the sensor cover window). Do they show on flats as well?
  5. Hi and welcome to SGL. Yes, that is quite "normal" (although we all wish it was not the case). What you are seeing is periodic error of the mount. The worm turns every 638 seconds and in those 638 seconds the pattern will roughly repeat itself. This happens because gears in the mount are not perfectly round. Yes, you have made a belt mod and that removed some of the gears - but the main reduction remains - worm and worm gear - and these are not round enough. There are two things that you can do to fix this to some extent: 1. Guide - this is what most people do. Guiding will sort out mechanical defects in the mount (to a degree - it is not all powerful, your mount still needs to be decent to guide well - the HEQ5 guides well in 99% of cases). 2. Do periodic error correction, or PEC. PEC is a process in which you record the average "wobble" of the gears over said period of time - and let software play it back in sync to counteract the actual error (software plays back the reverse movement as a correction). HEQ5 does not have PPEC - permanent periodic error correction - like newer mounts, but if you use EQMOD you can use VSPEC, which is a nice feature of EQMOD (VS stands for variable speed) - the driver itself will slow down and speed up the mount to counter periodic error. In the end, this is a recording from my HEQ5: Left to right movement is due to polar alignment error - it is uniform drift over time and not as bad as PE. Vertical oscillatory movement is periodic error - sometimes it is slow and smooth - and at moments it is fast changing. You want slow changing periodic error - fast changing is harder to guide out (although here it looks like fast changing - it really is not - each of these frames is one minute long in reality - so my mount guides really well).
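That 638 second figure follows directly from the gearing - a quick sketch, assuming the standard 135-tooth RA worm wheel of the HEQ5:

```python
# One worm revolution per (sidereal day / number of worm wheel teeth).
SIDEREAL_DAY_S = 86164.1   # seconds in one sidereal day
WORM_WHEEL_TEETH = 135     # HEQ5 RA worm wheel

worm_period_s = SIDEREAL_DAY_S / WORM_WHEEL_TEETH  # ~638 s
```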
  6. I don't think that you'll be close to what the mount can handle in terms of weight. The OTA weighs about 6.5 kg - and the camera is not very heavy. Guide scope + guide camera are not really heavy either - that is about 3 kg extra weight tops. Overall - you are at 10 kg and that is fine with an HEQ5 given that the scope is relatively compact (not a large OTA). You'll probably be oversampling a bit at 1.41"/px most of the time (2" FWHM seeing) - but in good seeing, you'll be quite close to the optimum sampling rate (which would be around 1.55"/px - for your guiding and 1.5" FWHM seeing). It can easily happen that you get consistently good guiding results with this OTA as it is smaller and less prone to flex from wind. I do think that something like the CEM60 (unfortunately discontinued) is a step up from the HEQ5 - but I don't think that you need it right away. Just give the HEQ5 a go with what you have once your scope arrives and I think you'll find that it works rather well as a combination.
  7. Why is that? What sort of guide RMS are you getting at the moment? What resolution do you expect to work with once you get your Esprit 100?
  8. A rough measurement gives ~0.12437"/px. That in turn gives 4810mm of focal length with 2.9µm pixel size. 305mm is the aperture, so the effective F/ratio is ~F/15.77. You were using an IR pass filter with 685nm cutoff wavelength. I'd say that you are roughly twice oversampled - but we can calculate it more precisely. Indeed, the critical F/ratio for this combination is ~F/8.47 - almost half of what you used. Not only do you not need to drizzle - you can in fact bin your data x2.
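For anyone who wants to check the arithmetic, here it is as a sketch (values taken from the post; the critical F/ratio uses F_crit = 2 × pixel size / wavelength, which comes from Nyquist sampling of the diffraction-limited cutoff):

```python
# Focal length from measured pixel scale, then effective and critical F/ratio.
pixel_um = 2.9          # camera pixel size, microns
scale = 0.12437         # measured sampling rate, "/px
aperture_mm = 305.0
wavelength_um = 0.685   # IR pass filter cutoff

focal_length_mm = 206.265 * pixel_um / scale       # ~4810 mm
f_ratio = focal_length_mm / aperture_mm            # ~F/15.77
f_critical = 2 * pixel_um / wavelength_um          # ~F/8.47
```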
  9. Lenses are hardly ever diffraction limited. Telescopes are diffraction limited and corrected for infinity. They will always outperform lenses on astro images (except a very cheap achromat vs a lens costing many thousands of pounds). By the way - that is a very nice M42 image.
  10. SGL just scales image depending on screen size (smaller on thread page and a bit larger in image display page). You can always open the image in separate window (right click - open link in new tab) and look at it at 100% zoom level. That is how I usually look at all images - at their proper resolution and not scaled.
  11. In order for drizzle to have any chance of bringing improvement - you need to be undersampled to begin with. What F/ratio were you using for the ASI290? In any case - here is the non-drizzled version upscaled to the drizzled version and again - blinked:
  12. Since you posted two images - I just compared the two - I did not mean to imply anything. I just did this in my browser (these are the respective screenshots - one scaled to 67% of its size and the other at 100%, so that they match in pixel scale). Maybe I just wanted to point out that there was no real need to drizzle as it did not contribute anything - in fact, more aggressive sharpening creates a better image. I did not analyze sampling rate and pixel scale. Could be that the image is oversampled, as ringing is visible (a consequence of wavelets / sharpening).
  13. I see no major difference between the two. The second one is just a bit more aggressively sharpened - and that is good, but if you resize them to the same size - they are virtually identical (nothing gained by drizzle).
  14. I regularly use ImageJ for that purpose - as long as you export the data as 32-bit FITS (it is free and open source). I think some other free software may also have it - like Iris (and now looking at Siril - there is a discussion about whether they should implement it). As far as I know - PI has it under IntegerResample (choose the average method).
  15. Really not much of a contest. The 70D has almost double the quantum efficiency compared to the 300D. The 500D is somewhere in between: 26% vs 38% vs 48% (300D, 500D and 70D). The 70D has the smallest pixels of the lot - so you'll need to bin your data. Since it has 4.1µm pixel size, I would recommend that you do the following: stack the image normally at full resolution and once you are done stacking, while the data is still linear - bin x3 in software before you begin processing it further.
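Software binning like this is just block averaging of the linear stacked data - a minimal sketch (the function name and toy array are mine, for illustration):

```python
import numpy as np

# Average k x k pixel blocks of a linear, stacked monochrome image.
def bin_image(img, k):
    h, w = img.shape
    h, w = h - h % k, w - w % k   # trim so dimensions divide evenly by k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

data = np.arange(36, dtype=np.float32).reshape(6, 6)
binned = bin_image(data, 3)       # 6x6 -> 2x2, each output pixel a 3x3 average
```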
  16. Color shift is due to the fact that LP filters have quite a few gaps in their response curve. This throws off color balance and even prevents you from properly recording some colors (it lowers the camera gamut). Here is a comparison between a couple of LPS filters. Notice that R, G and B are not as much affected as, for example, the neutral grey/sand color of the background. This is because the designers of the filters tried to get primary colors properly balanced - but that is not the problem - the problem is in mixed colors. You can do a couple of tricks to restore this color balance - but they are quite tricky. You need a reference device (like a good DSLR or spectroscope) in order to derive a color transform matrix that will give the closest match to actual colors. Most people will not bother with that. Ha is a very difficult color to properly render in astronomical images for several reasons. First - it is a pure spectral color, meaning that it consists of a single wavelength. No display can produce that color. In fact - the only way to really reproduce that color (or any other spectral color) is to emit that exact wavelength. Any red that we use in our images will simply not be as saturated and deep a red as Ha red is. Besides saturation - our vision tends to be less sensitive at that wavelength - and perceived brightness tends to be lower. If we want to render the Ha wavelength next to regular red - and we assume the two are of the same intensity as light sources - we need to render Ha darker. It looks something like this: If the left is like pure red from the RGB palette - (1,0,0) - then Ha would be darker and deeper and richer (more saturated).
There is a very nice trick that you can use to see what Ha color looks like - just take your Ha filter and hold it against a light bulb (or any other source of light - well, except the Sun - it will still be too strong to look at) - you'll see the world in monochromatic Ha light and everything will be Ha colored - that way you can get a sense of what Ha color looks like. You'll often see images like this trying to represent the visible spectrum of light (none of the colors is an actual rainbow color, as no display can produce accurate spectral colors - they are just a close enough match - or in this case - not even that close), but this one is a much better representation of the spectrum (found on Wikipedia): Here green is more vibrant, next to green you get that teal of the OIII wavelength, and both violet and red fade to black because we lose sensitivity at the ends of the spectrum. You'll notice that Ha tends to be that deep rich red color - not the bright flat red as in the above spectrum.
  17. Not all narrowband has this. OIII often has higher FWHM than Ha. Atmospheric influence: shorter wavelengths bend more than longer wavelengths, and Ha is at the red end of the spectrum. Often, lunar imagers use narrowband filters in Ha to lessen atmospheric influence. For this reason it usually creates very tight stars - simply not scattered around as much as shorter wavelengths.
  18. I'm for using an LP filter in combination with regular filters. Depending on your optical train, there are several options. You can use the LP filter instead of the lum filter. I used to do this because I had a 1.25" version of the LP filter and had a problem using two 1.25" filters stacked. This prevents most of the gradients in luminance - but color data suffers from light pollution. You can use a 2" LP filter on top of your regular imaging train (say first thing after the focuser - like screwed into the CC or similar) - and that is probably the best option in a light polluted location - however, you need to properly manage the color shift that results from using an LP filter - otherwise you get very funny colors. Also, make sure that your LP filter blocks the UV/IR part of the spectrum, as not all do, and with refractors - if you don't also use a luminance filter - it can bloat your stars.
  19. I've noticed that DrizzleIntegration is often used, and I believe that one or more tutorials out there showed it being used and that is the reason why people use it - most just follow a good tutorial. However, as you have seen yourself - it actually hurts the image if used improperly.
  20. Indeed - mosaic that contains celestial pole will be "180° rotated"
  21. A similar thing happens when doing wide field shots. Here is an example from Stellarium. When I put in a Canon 750d (APS-C) and 85mm lens: I took a group of stars - two of them in particular - when I align them to the left edge - the line connecting them is angled one way with respect to vertical (top star is right of vertical and bottom is left). When I align the same pair of stars to the right edge - the angle changes - now the top star is left and the bottom is right. The whole frame "rotated". In fact - both frames are at 0° - aligned with RA - but RA is not a straight line, it is a circle on the celestial sphere, and as you track along the RA circle - your frame rotates. This is with perfect polar alignment. This effect can also lead to panels being rotated with respect to one another - and they always are - it is the FOV that dictates whether this will be seen or not (small FOV - very small angle of rotation).
  22. The main question is - did you find it easier to process this way and were you able to pull out more (better SNR)? As far as I can tell - it indeed goes a bit deeper - the outer glow is more visible on the second image, but I'd like to hear your subjective feel while processing.
  23. Very nice. Stars are a bit odd shaped. Some of it might be down to guiding (how good was it?) and some of it is probably down to collimation. Maybe look into getting it improved? (I've found this to be useful technique when collimating: https://deepspaceplace.com/gso8rccollimate.php , mind you - you don't have to use Bahtinov mask, FWHM measurement also works)
  24. Yes indeed, CCDs should be binned at hardware level as that gives you an improvement in read noise. With CMOS sensors, I'd say that it is better to do it after capture. In principle - doing it in firmware (camera software) and after capture gives you the same result most of the time, but there are several pros and cons for each approach. - Binning in firmware means less data captured, as you cut the data to be transferred over USB (or any other link type that connects the camera) and stored on your hard drive to 1/4 (or less for higher bin sizes). It also means less data to process. This is a pro for doing it in camera firmware / software. - Binning after capture gives you a sort of flexibility. You can decide what bin factor to use for that particular dataset. That is a pro for binning after capture. - Sometimes binning in camera firmware can result in "data loss". With cameras that are, for example, 12 or 14 bit - you don't have to worry about that, as binning will still give you 16-bit data, but recent CMOS sensors have started using 16-bit ADCs. If you add a couple of 16-bit numbers - you will exceed 16-bit precision, and if you download that data in 16-bit format it will be a bit truncated. This does not happen with binning after capture, as you can convert the data to 32-bit float before you start processing. That way you lose no precision. - You might like to try some advanced stuff like fractional binning and similar - again, for that it is better to bin your data in processing rather than at capture time. In the end - it is a tradeoff - would you like to do it to save storage space and achieve faster downloads (which are pretty fast with USB 3.0 as is), or would you like the added flexibility of doing it after calibration.
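The 16-bit truncation point is easy to demonstrate - a small sketch showing a 2x2 block of bright 16-bit pixels wrapping around when summed in 16 bits, but not when promoted to 32-bit float first:

```python
import numpy as np

# Four bright pixels from a hypothetical 16-bit sensor.
block = np.array([60000, 60000, 60000, 60000], dtype=np.uint16)

overflowed = block.sum(dtype=np.uint16)   # accumulated in 16 bits - wraps around
safe = block.astype(np.float32).sum()     # full 240000.0 preserved
```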
  25. I'm rather interested to find out where you found the distance for 2MFGC 8778? That is such a nice galaxy to try to "eyeball" the distance. It looks like a regular spiral galaxy of average size. As such - we could say it is about 120,000 ly across (an M31, NGC7331 sort of galaxy). Can we use that to get the distance? According to measurement from your first image, the pixel scale is 1.834"/px and the galaxy is 17px across. If we take 120,000 ly for that galaxy - we get a 793 MLy distance - very close to the 760 MLy that you quoted. Science at work!
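Here is that eyeball estimate spelled out (the 120,000 ly size is the assumption from the post; variable names are mine, and the small-angle approximation is used throughout):

```python
# Distance from assumed physical size and measured angular size.
RAD_TO_ARCSEC = 206265.0

scale_arcsec_px = 1.834       # measured pixel scale, "/px
size_px = 17                  # galaxy diameter on the image, pixels
assumed_size_ly = 120_000     # assumed M31 / NGC7331-class spiral

angular_size = scale_arcsec_px * size_px                        # ~31.2 arcsec
distance_mly = assumed_size_ly * RAD_TO_ARCSEC / angular_size / 1e6  # ~794 MLy
```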