Everything posted by vlaiv

  1. Yes, the simplest way would be to image 3 different star fields - one due east, one due south and one due west, at about 45° altitude. That will let gravity act on the camera / reducer combo from three different directions, and then you can see if the coma changes position in the image. If it stays in the same part of the image (say the left side) - then you have ruled out gravity, but there could still be focuser tilt. Focuser tilt can be tested by rotating the camera by 180° or 90° and shooting the same part of the sky. The coma should move in the opposite direction - changing position in the image by either 180° or 90° (depending on how much you rotated the camera).
  2. Depends what you want to achieve. I would avoid super pixel altogether if possible. There is a better way to do it - one can split the image into four subs - one red sub, two green subs and one blue sub. Then you take each "color" and stack it as mono subs. That is the simplest and best approach to avoid the slight channel offset that super pixel mode introduces. Unfortunately, I'm not sure if this approach is readily available in software, but it is the "cleanest" way to treat OSC data - it will give you the exact sampling rate for each channel, and if you align all channels when stacking - it will fix any offset issues. On the other hand, binning after stacking will work "properly" only if subs are dithered. If they are aligned - well, you won't get the expected SNR improvement, as 3/4 of the red and blue pixels are in fact linearly dependent on other pixels, and 1/2 of the green pixels as well (debayering fills in missing pixels by interpolation of existing pixels - and the noise in those filled-in pixels is no longer random - it is a product of adjacent pixels). The main problem is that people think an OSC camera has the same resolution / pixel count as the equivalent mono camera, and the same pixel count is often quoted. For example, the ASI2600MC is 6248px x 4176px, but in reality you only have 3124 x 2088 red and blue pixels in the color version. I think it is best to view color sensors for what they really are and skip debayering altogether - they are "interspersed" red, blue and two green sensors, each having 1/4 of the original pixel count and corresponding resolution.
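As a rough illustration of the channel-splitting approach above - a minimal numpy sketch, assuming an RGGB Bayer layout (the function name and layout are my assumptions; adjust the row/column offsets for your sensor's pattern):

```python
import numpy as np

def split_cfa_rggb(raw):
    """Split an undebayered (CFA) frame into its four colour planes.

    Each plane is half the width and height of the raw frame - the true
    per-channel resolution of an OSC sensor. Assumes an RGGB layout.
    """
    r  = raw[0::2, 0::2]   # red pixels
    g1 = raw[0::2, 1::2]   # first green plane
    g2 = raw[1::2, 0::2]   # second green plane
    b  = raw[1::2, 1::2]   # blue pixels
    return r, g1, g2, b

# example: a 6248 x 4176 sensor yields 3124 x 2088 planes per channel
raw = np.zeros((4176, 6248), dtype=np.uint16)
r, g1, g2, b = split_cfa_rggb(raw)
print(r.shape, g1.shape, g2.shape, b.shape)   # (2088, 3124) each
```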
  3. In this particular case, it would not, if only Ha is extracted from the L-eNhance and L-eXtreme filters. If you want to extract both OIII and Ha data, then you would have a problem with the L-eNhance, as it can't separate Hb from OIII, so you would end up using both at the same time as a single signal. The color of Hb and OIII is slightly different, and this can have an impact if you try to simulate true color (I say simulate because computer screens are not capable of showing pure spectral colors like Ha, OIII or Hb).
  4. Hi and welcome to SGL. Although Bresser is a German vendor - it sources its scopes from the Far East, so the telescope itself is likely made in China. I don't think that you'll see a drastic difference in the quality of the views between the two - if any, really. On average, I'd say that GSO will have a very slight edge in quality of optics (based on my own experience with a GSO RC scope), but sample to sample variation means that chance plays a part, and if you pick any two scopes - one from Bresser and the other from GSO - there is no telling which one will have the slightly better mirror figure. The largest difference between the two scopes is the focuser and the way the scope is mounted. The Bresser design allows for easy OTA detachment and use on another mount - something that might be interesting to you - but given that you are primarily a visual observer - it is unlikely to matter (it is a very good feature for someone wishing to do both visual and, for example, planetary imaging with such a scope and a suitable tracking mount). The Bresser focuser seems to be a single speed unit. It is large and uses rack and pinion, which is good for stability - but not necessary for visual (it helps with imaging, as the focuser is less likely to slip). The GSO focuser is dual speed - and that is a plus in my view. Fine focusing really helps when observing planets, for example. I added a fine focusing unit to my 8" F/6 Dob and I imagine it will be even more useful on an F/4.7 scope. Although it is a Crayford style focuser - for visual I don't think you'll ever experience slippage or anything like that. I know that I haven't with my 8" Dob.
  5. I can explain the complete theory behind proper mixing of such data, and give you the math behind it, but I'm not sure if you'll be willing to give it a go, as it requires quite a bit of manual work to get things done - it is not implemented in software as a simple set of actions. In a nutshell, you need to convert both of your data sets to the XYZ color space (RGB requires deriving a transform matrix, and for the L-eXtreme we would build a mathematical model, as it is narrowband), do the processing and mixing in XYZ color space, and then convert it back to RGB color space using some clever color substitution techniques. This is of course the way to get as accurate a data blend as possible and colors as close to true colors as possible. You will not get the kind of Ha blend people produce when trying to emphasize Ha regions in galaxies, for example - as that is a sort of artificial coloring (emphasizing color to accentuate features). If you want to replicate that sort of look - like people usually do with Ha data - then it is a fairly simple procedure: stack and process your RGB data normally, then stack your L-eXtreme data and extract the red channel only - that will be your Ha data. Stretch your Ha data so that the Ha structures show nice and bright. Create a starless version. Blend it in as a new layer with the "brighter" (lighten) mode set - which means it will use the Ha data wherever it is brighter than the underlying data. In fact - experiment with blend modes - one of them will give you a nice blend if you have boosted the Ha layer properly.
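As a small sketch of that "brighter" blend step - assuming both layers are already stretched, star-aligned and scaled to the 0-1 range, and blending the Ha layer into the red channel only (that last choice is just one simple variant, not the only way to do it):

```python
import numpy as np

def lighten_blend_ha(rgb, ha):
    """Blend a starless Ha layer into the red channel, keeping whichever
    value is brighter at each pixel (the 'lighten'/'brighter' layer mode).

    rgb : stretched colour image, shape (H, W, 3), values 0..1
    ha  : stretched, starless Ha image, shape (H, W), values 0..1
    """
    out = rgb.copy()
    out[..., 0] = np.maximum(rgb[..., 0], ha)
    return out
```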
  6. The fact that you collimated the scope without gear on it, that collimating the scope with imaging gear on it makes things worse, and that you have coma issues on one side of the image, suggests to me that the problem might be some sort of sag due to the weight of the imaging gear. Is your primary mirror in any way attached to the rear port of the scope? Can adding weight to the back port somehow tilt the mirror a bit? In lighter RC designs that I have seen - this is the case, and if the primary lock screws are not tight - moving the scope across the sky can affect collimation, as gravity acts on the imaging gear in different directions.
  7. The simplest method is to do it after stacking. You take your linear image, prior to any processing steps, and choose the PI option "IntegerResample". https://pixinsight.com/doc/tools/IntegerResample/IntegerResample.html Mode should be set to downsample, and the resampling factor is the bin factor - set it to 2 if you want to bin 2x2, 3 for 3x3 and so on... The downsample mode should be set to average (or sum if you want to actually sum pixel values to measure things like photometry - not really important for image processing). Super pixel mode is slightly different from bin x2 after stacking. There is a very slight difference in result, and a general difference in method. As far as image size and sampling rate go - they will produce the same result. As far as SNR, pixel-to-pixel correlation, channel alignment and a few other things go - they will differ. Do you know why you would want to do that? Do you have any particular reason to do so?
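For reference, software binning after stacking amounts to block-averaging the linear image - a minimal numpy sketch of the same kind of operation an integer downsample in "average" mode performs (the function name is mine):

```python
import numpy as np

def bin_average(img, factor=2):
    """Software-bin a 2D linear image by an integer factor using the mean.

    Rows/columns that do not fill a complete factor x factor block are
    trimmed. Replace .mean() with .sum() if you need summed pixel values
    for photometry.
    """
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```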
  8. I would recommend against using drizzle. You'll be using a 71mm aperture scope. Such a scope under regular imaging conditions (decent mount, average seeing, etc...) will produce an image with ~3.2" FWHM stars (that is with 0.8" RMS guiding and 2" seeing). The optimum sampling rate for that level of sharpness is about 2"/px. At 2.7"/px you are already quite close to 2"/px and you don't need to drizzle - it will not provide you with additional detail. (In fact, I don't think that drizzle works at all in amateur setups, but that is another topic.) In any case - neither drizzle nor binning will change the size of the target with respect to the FOV. Only cropping / mosaicking or the use of reducers / focal extenders will change the FOV, and hence the ratio of target size to FOV (as the target size remains the same). If you want to make your target larger with respect to the FOV - you have two options: 1. Shoot at the current resolution and then crop your FOV. 2. Use a focal extender (like a barlow or telecentric lens). The second option also changes your pixel scale, and that is something you might or might not like - so do take it into consideration.
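The 3.2" figure above comes from adding the main blur sources in quadrature, and the ~2"/px from sampling at roughly FWHM / 1.6 - a back-of-envelope version, where the 2.355 RMS-to-FWHM factor and 550nm wavelength are the usual approximations:

```python
import math

seeing_fwhm = 2.0        # arcsec, average seeing
guide_rms   = 0.8        # arcsec RMS guiding
aperture_m  = 0.071      # 71 mm scope
wavelength  = 550e-9     # metres

guide_fwhm = guide_rms * 2.355                                    # RMS -> FWHM
airy_fwhm  = math.degrees(1.02 * wavelength / aperture_m) * 3600  # diffraction

total_fwhm = math.sqrt(seeing_fwhm**2 + guide_fwhm**2 + airy_fwhm**2)
print(f"star FWHM ~ {total_fwhm:.1f}\", sample at ~ {total_fwhm / 1.6:.1f}\"/px")
# -> roughly 3.2" FWHM and ~2"/px for this setup
```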
  9. No, binning will not do what you want. Like @Nik271 explained - binning reduces the number of pixels the image is composed of, and in the process increases the SNR of that image at the expense of detail. In some cases no detail will be lost - if your image is oversampled. What you want in your case is to crop the image. First, let's address hardware vs software binning. Hardware binning is only available on CCD sensors that support it. There is very little difference between the two methods, and it concerns read noise. Everything else is the same - except for the fact that software binning can be performed after you gather the data, in processing, which is good as you can decide to bin based on the actual level of detail achieved. You are imaging with an ASI294MC, so your only option is software binning. You can choose to do it in the camera firmware (at capture time) - but I would advise against it, as there is no real benefit in doing so (except for smaller image files). You can decide to bin your data in software if you want to improve SNR, or if you conclude that there is not enough detail present for bin x1 to be worthwhile (which is often the case with today's small-pixel cameras). In any case, I would advise you to take the image at normal resolution, decide after stacking whether you want to bin or not, and then create two versions of the image: 1. Full FOV - this will show the target at proper size when viewed at 100% zoom. 2. A cropped version - that will show the target properly without the need to zoom in to 100%.
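A tiny numeric check of the bin-vs-crop point above - binning changes pixel scale and pixel count together, so the FOV (and the target's share of it) stays the same, while cropping shrinks the FOV (the numbers are just placeholders):

```python
pixels = 4000          # pixels across the frame (placeholder value)
scale  = 1.0           # arcsec per pixel (placeholder value)

fov         = pixels * scale                  # 4000" across
fov_binned  = (pixels // 2) * (scale * 2)     # still 4000" - target looks the same
fov_cropped = (pixels // 2) * scale           # 2000" - target fills twice the frame
print(fov, fov_binned, fov_cropped)
```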
  10. I guess that you should have no issues with FPS then. You can always try an "on bench" speed test. Select a smaller ROI - like 640x480, switch to RAW8 format, turn on Turbo USB / High speed and see what sort of FPS you are getting in SharpCap (provided you are using it).
  11. Yes, solid state disks - they have much faster data transfer speeds than regular hard drives with spinning platters (magnetic spinning discs now seem so "stone age" in computer terms).
  12. Aperture is important in astrophotography, but it is not the only part of the equation. 40, 50 or 60mm of aperture can quite happily image celestial targets, provided you take care of the other part of the equation. The speed of an imaging setup depends on "aperture at resolution". If you think about it - coming from visual - if you use the same telescope, hence the same aperture, but you raise the magnification - the target becomes dimmer. This is because the same amount of light is spread over a larger surface. Similarly, the speed at which an image is taken (how much time you need to spend imaging your target) depends on aperture size, but also on working resolution. Working resolution depends on focal length and pixel size, and can be changed by binning (combining multiple pixels into a single pixel - like a group of 2x2 or 3x3 pixels acting as a single pixel with x4 or x9 the effective light gathering surface). Chromatism can't be removed effectively in post processing - but there are tricks to lessen it at capture time: 1. Use of an aperture mask. 2. Use of filters that remove specific parts of the spectrum. Field curvature is something that depends on the focal length of the telescope, and longer focal length telescopes have less of it. There are also field flatteners that deal with it.
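If it helps, here is a rough way to compare "aperture at resolution" between setups - speed taken as aperture area times pixel scale squared, ignoring QE, losses and central obstruction (the 60mm F/6 example numbers are mine, purely for illustration):

```python
import math

def relative_speed(aperture_mm, focal_length_mm, pixel_um, bin_factor=1):
    """Relative imaging speed ~ aperture area x (pixel scale)^2.

    Pixel scale (arcsec/px) = 206.265 * pixel size (um) / focal length (mm);
    binning multiplies the effective pixel size by the bin factor.
    """
    scale = 206.265 * pixel_um * bin_factor / focal_length_mm
    area = math.pi * (aperture_mm / 2) ** 2
    return area * scale ** 2

# 60 mm F/6 refractor with 3.75 um pixels, unbinned vs binned 2x2
print(relative_speed(60, 360, 3.75))      # baseline
print(relative_speed(60, 360, 3.75, 2))   # x4 "faster" per output pixel
```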
  13. Maybe if you lower the number of stacked frames? Try stacking something like 1-5% to see what sort of image you get. Also, if you are using AS!3 - play with the size of the alignment points. Make them smaller. I have a feeling that more can be pulled out of that recording.
  14. Really depends on what you want to image. Lenses already have a good focusing mechanism and direct attachment to a DSLR (provided you get a lens for the camera model you have). Their downside is that they are not diffraction limited, often suffer from CA and other aberrations wide open, and they come in short focal lengths. Achromats, on the other hand, have CA issues, their stock focusers are not up to the task of holding a DSLR (weight issues) and they require a much larger mount. They are, however, diffraction limited optics and are meant for astronomy. Most people don't like the idea of using achromats for astrophotography because of chromatic aberration - but CA is really not that big an issue compared to lenses. Let me show you this with an example. Here is an image that I took with a SkyWatcher StarTravel 102mm F/5 scope (you can barely make out the Crescent Nebula in it). That image is actually 1280x1024 in size (or thereabouts) - but I resized it down to show you what a 100mm lens would create with the above camera (which was a really small planetary camera). CA is barely noticeable in this configuration. Using the same telescope with a DSLR would give you a much larger FOV. On the other hand, if you know a few tricks - you can take an achromat scope and remove chromatic aberration from it completely for a bit higher resolution work. The next image was done by using an aperture mask and a Wratten #8 yellow filter (again, using a small planetary type camera - an ASI185 this time). It is a 2x2 mosaic. As a comparison, with the similarly sized sensor of the ASI178 and a Samyang 85mm F/1.4, you can get a comparable image. That one was taken wide open - at F/1.4 - and it shows: stars are "blobs" rather than points and some have very pronounced halos. In the crop from the color version of that image (not nearly as stretched as the mono version), both red and blue / violet halos show around brighter stars. A lens like the Samyang 85mm or 135mm will outperform any achromat scope - but you'll be limited to that focal length. Both lenses need to be used stopped down, at say F/2.8, to give best results, although you can use them wide open if you want to do very wide mosaics and are prepared to bin your data / lower resolution, or if you use filters and a mono camera. Telescopes will have better performance on the same targets at the same resolution, but they inherently have longer focal lengths and you definitely need to use mosaics and binning with them to match the FOV and resolution of the lens. You can shoot higher resolution images with them, but then you need to be prepared to use some tricks - like using filters and further stopping down the aperture and so on. I guess that in order to decide what is best for you - lens or scope - you need to answer a few questions first: 1. What mount will you be using for imaging (this is the really important bit)? 2. What type of targets will you be interested in (wide field low resolution work, or narrower field medium resolution work - high resolution work is really not for beginners, at least not long exposure; planetary is fine)?
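For the FOV comparison mentioned above, the field scales directly with sensor size at a given focal length - a small calculator, with approximate sensor dimensions assumed for a small planetary camera and an APS-C DSLR:

```python
import math

def fov_arcmin(sensor_mm, focal_length_mm):
    """Field of view along one sensor dimension, in arcminutes."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_length_mm))) * 60

fl = 500.0  # 102 mm F/5 achromat
print(fov_arcmin(7.3, fl), fov_arcmin(4.1, fl))     # ~small planetary sensor
print(fov_arcmin(23.5, fl), fov_arcmin(15.6, fl))   # ~APS-C DSLR, several times wider
```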
  15. 40000 frames is feasible only with USB 3.0 cameras, where you can achieve 150-200fps. You also need an SSD, or enough RAM for buffering purposes. I used to image with a very low spec laptop - but it had an SSD and USB 3.0, and I was able to get very good FPS rates - close to those advertised as the maximum for a given camera (ASI185MC / ASI178MC). I limit my runs to about 4-5 minutes. It really depends on the aperture size and the software that I'm using. AS!3 can deal with quite a bit of rotation of the planet because of the way it works. You can easily calculate the largest drift of features, depending on the planet's rotation speed, distance and working resolution. In most cases - the drift over a few minutes is of the order of the pixel size - so maybe 2-3 pixels. Frame to frame distortion will move features more than that (sometimes even 1"-2", so several pixels), and AS!3 is able to undo that distortion by using alignment points and warping the image back (effectively solving the issue of the tilt component of the wavefront aberration over the image - a bit like rudimentary adaptive optics). That also solves any small rotation issue. The reason this does not work when doing LRGB imaging is that the actual feature position is calculated from a reference frame. The reference frame is made so that alignment points are placed at their "average position" - the rationale being that seeing moves a point randomly around its true location. If you image each channel for a few minutes - well, they will have different reference frames due to planet rotation. The best way to deal with that is to use WinJUPOS and derotate the color frames to match the luminance. By the way, I think that a hardware upgrade to be able to exploit USB 3.0 speeds is well worth it. If you keep everything else the same, but you capture x12 more subs with a USB 3.0 system (36000 vs 3000 frames) - you'll be effectively improving your SNR by a factor of about 3.5. That enables you to sharpen more aggressively and get a more detailed image.
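To put a number on the feature drift mentioned above, here is a back-of-envelope estimate for Jupiter - the apparent diameter, rotation period and pixel scale are assumed illustrative values:

```python
import math

apparent_diameter = 45.0          # arcsec, Jupiter near opposition (assumed)
rotation_period_s = 9.925 * 3600  # ~9h55m rotation period
capture_time_s    = 3 * 60        # a few-minute run
pixel_scale       = 0.25          # arcsec/px, typical planetary sampling (assumed)

# a feature near the central meridian moves fastest; its apparent drift is
# roughly the angular radius times the rotation angle during the capture
rotation_angle = 2 * math.pi * capture_time_s / rotation_period_s
drift = (apparent_diameter / 2) * rotation_angle
print(f"drift ~ {drift:.2f}\" ~ {drift / pixel_scale:.1f} px")   # a couple of pixels
```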
  16. That is actually about right. I'm not sure why you have the image you have there - it should be better looking. It was either exceptionally poor seeing, or something was not quite right. What did you use for stacking? How many frames did you end up using? What is the size of your alignment points (if you used AS!3 - which you should use, btw)? All of these things can make a difference to the quality of your final image.
  17. Indeed, sorry about that, I just "browsed" through the video and did not realize that you have a reverse osmosis device installed in your house. That is perfectly fine as well, and the main point is of course to let people know that hard water can leave stains on mirrors, and that they should try to use water that is either distilled, demineralized or, like in your case, filtered by reverse osmosis.
  18. The final rinse should be done with distilled water in case your tap water is too hard. Water that is too hard will leave residue after it dries.
  19. Any scope will easily handle a x3 barlow - the question is whether there is a point. The ASI224 has a 3.75µm pixel size, and the corresponding F/ratio for critical sampling is F/15 (calculated at 510nm). There is no need to go higher than that - it only spreads the light more and reduces SNR for a given total imaging time. No additional detail will be captured.
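The F/15 figure follows from the usual critical-sampling relation, F ≈ 2 × pixel size / wavelength:

```python
pixel_size_um = 3.75    # ASI224 pixel size
wavelength_um = 0.510   # 510 nm

critical_f_ratio = 2 * pixel_size_um / wavelength_um
print(f"critical F/ratio ~ F/{critical_f_ratio:.0f}")   # about F/15
```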
  20. The next step would be to add a x2 barlow to your setup and tweak the capture parameters. 2000 frames is a very low count - you want x20 of that - about 40000. Set your exposure length to about 5-6ms. Set your gain to 379. Use 8bit mode for the capture. Make sure your camera is connected via a USB 3.0 connection and your computer has an SSD to store the video. Use the SER video format. Don't worry about the histogram being "low" or anything like that - it is not important in planetary imaging. Take at least 3 or 4 minutes of video (say you manage 150FPS and you image for 4 minutes - that is 240s - the total number of frames should be around 36000). Stack the top 5-10% of that. If you can - take calibration frames as well (well, calibration movies). Take something like a 100+ sub dark video, and do at least that much for flat frames (and flat darks). Use PIPP to pre-process your video and keep the bayer matrix intact. AS!3 will use bayer drizzle to extract the color information.
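As a quick sanity check of the frame counts above (the 7% keep fraction is just a value inside the suggested 5-10% range):

```python
import math

fps, minutes, keep = 150, 4, 0.07
captured = fps * minutes * 60            # ~36000 frames
stacked  = int(captured * keep)          # best few percent kept for stacking
print(captured, stacked, f"SNR gain ~ x{math.sqrt(stacked):.0f} over a single frame")
```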
  21. When using high gain and short exposures - you can use 8bit mode without any issues - it will give you even higher frame rates.
  22. Provided that the flat panel has a nice uniform spectrum and is generally white in color (the color of its light). I'm not 100% sure about LED panels, though - they seem to have a "lump" in the blue part of the spectrum. But even with this lump, if the light the panel gives off is white - it should give you a lower ADU reading in blue than in green, if your camera has a common QE curve. If the QE peaks in the 500-600nm range and is lower in the 400-500nm range, then blue will definitely be lower in ADU value.
  23. Yes, it would. Also - just imaging a white object will do the same - green will be stronger than blue - because the QE is lower in the blue part of the spectrum than it is in green.