Everything posted by vlaiv

  1. That is to be expected. The level of CA depends on the glass type as well as on the speed and aperture of the telescope. The AR102xs is a 4" F/4.5 scope - you simply can't get that sort of speed at 4" with good correction unless you throw in something like 5-6 optical elements. I'm sure that it indeed uses some sort of ED glass, as a simple achromat at F/4.5 and 4" would be much more colorful. It is a good scope - if you need that sort of speed and you want to limit yourself to wide field viewing. Similarly - if you get the ST102 - you'll get a wide field scope that has CA and is light weight. If you want a 4" ED scope that is good for visual and won't break the bank (it's not cheap, but it is not as expensive as some models out there) - get this one: https://www.teleskop-express.de/shop/product_info.php/info/p4964_TS-Optics-ED-102-mm-f-7-Refractor-Telescope-with-2-5--R-P-focuser.html That scope is still not color free - you'll be able to see some CA in high power views on bright targets like Venus or bright stars. It is not quite grab'n'go either - it weighs 4 kg and is at least 60 cm long with the dew shield retracted (I'm hoping it has a retractable dew shield - it looks like it does).
  2. Check this out: http://interferometrie.blogspot.com/2017/06/3-short-achromats-bresser-ar102xs.html
  3. It was rather a long time ago (about 5 years), so I don't remember the capture details, but I'll try my best to describe the conditions. I was using a SW 130/900 Newtonian on an EQ2 mount with a simple tracking motor (a DC one with a potentiometer). I remember having trouble "dialing in" the proper speed, and I often had to issue speed corrections in order to keep the planet in the FOV (every few minutes the planet would drift out of the 640x480 FOV and I needed to tweak the tracking speed to prevent that). I usually did a 2-3 minute video per image back then (this has not changed much - I now do 5 minute ones for smaller scopes) and this animation contains 16 frames - but it is the "forward then back" type, so really there might be a total of 9 frames of forward motion. That is at most half an hour of imaging, and I was going to say - I probably did not spend more than an hour or two on the whole thing. I'd say - about half an hour to 40 minutes of imaging.
  4. Very good point by @happy-kat You can get a planetary imaging camera for a fraction of the cost. This is how I did my first planetary images. I used a Logitech C270. I just removed the front lens and used a 32 mm PVC pipe sanded down to 1.25" as a nosepiece. The only problem with web cameras is that they work on the USB 2.0 standard, and because there is a lot of data to be streamed for HD (1280x720) at 30fps - they always use compression. This is not good for planetary imaging and causes blur issues (and other artifacts). Here are some examples that I've done with the Logitech C270 camera: And here are some that I've done with an ASI120 equivalent camera (a QHY5IILc that I got shortly after) - same scope: You can see that the modified web cam does have some issues with sharpness due to compression - but it can still achieve very good results, and for a fraction of the cost.
  5. An image of the Moon taken at the same time from two different locations on Earth is probably going to look the same. In theory, with pixel-level analysis, you might be able to tell something. Let's do some math to see how much difference there would be.

     The Moon is 384,400 km away and the Earth has a diameter of 12,740 km. Let's imagine that one photographer is at the far east and the other at the far west - so that the distance between them is 12,740 km. Not a realistic scenario, but it is an upper bound - in reality, two photographers will certainly be closer. This means that the angular separation of the two photographers is 1.8988°, or in other words - one sees the Moon from a 1.8988° different angle. The Moon has a circumference of 10,914.5 km, and on the surface of the Moon, an angle of 1.8988° translates into 57.568 km.

     Say that the photographs are high res and shot at 1 arc second per pixel (much higher resolution than the image posted above). 1 arc second (1 pixel) corresponds to 1.8636 km at the Moon's distance. In the end we can calculate that a feature in the center of the Moon will be shifted by 57.568 / 1.8636 = ~31 px. That is actually something that we could detect. Here is a diagram with exaggerated angles to explain the principle:

     If, for example, we have a 3 times smaller image and 10 times less distance between photo sites 1 and 2 - then the feature shift would be at most one pixel - and that is not something you can detect. For the most part - you can't tell if an image was shot from two different locations if it was shot at the same time under the same conditions.
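     A minimal Python sketch of the same arithmetic (same round figures as above, using the small-angle approximation for the parallax angle):

     ```python
     import math

     moon_distance_km = 384_400        # mean Earth-Moon distance
     baseline_km = 12_740              # Earth's diameter - upper bound on photographer separation
     moon_circumference_km = 10_914.5

     # Angle between the two lines of sight (small-angle approximation), ~1.9 deg
     parallax_deg = math.degrees(baseline_km / moon_distance_km)

     # Shift of a feature on the lunar surface corresponding to that angle, ~57.6 km
     shift_km = moon_circumference_km * parallax_deg / 360

     # Scale of 1 arc second per pixel projected onto the Moon, ~1.86 km per pixel
     km_per_pixel = moon_distance_km * math.radians(1 / 3600)

     print(round(shift_km / km_per_pixel))   # ~31 pixels of shift between the two photos
     ```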
  6. Ah I see. Yes, but that is a different type of alignment process. The regular alignment process that you described aligns the whole image at once; it often uses an affine transform to do so and rarely does non-linear correction to account for lens distortion in wide field images. There is another type of alignment process, used in planetary imaging, that works differently. It uses alignment points. Each point in one frame is checked against a number of neighboring points in the other frame and a correlation is calculated (a match is performed). This creates a list of "same" / matching points across the frames. The true position is calculated as the mean geometrical point across the frames, and then each frame is aligned against that calculated reference frame (which is just a list of geometrical positions where each alignment point should be placed). Then each frame is unwarped so that its alignment points match the reference frame, and then stacking is performed. So yes, you are right - here frames are warped, or rather unwarped, when aligning.
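     Roughly, matching a single alignment point could look like the sketch below - a brute-force search for the integer shift that maximizes the correlation between a small patch around the point in the reference frame and the corresponding region in another frame. This is only an illustration of the idea, not how AutoStakkert! actually implements it; the wrap-around behavior of np.roll is a simplification that is fine for shifts much smaller than the patch.

     ```python
     import numpy as np

     def best_local_shift(ref_patch, patch, max_shift=5):
         """Find the integer (dy, dx) shift that best matches 'patch' onto
         'ref_patch', scored by normalized correlation of mean-subtracted data."""
         a = ref_patch - ref_patch.mean()
         best, best_score = (0, 0), -np.inf
         for dy in range(-max_shift, max_shift + 1):
             for dx in range(-max_shift, max_shift + 1):
                 b = np.roll(patch, (dy, dx), axis=(0, 1))   # wrap-around: fine for small shifts
                 b = b - b.mean()
                 score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                 if score > best_score:
                     best, best_score = (dy, dx), score
         return best
     ```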
  7. AutoStakkert! 3 https://www.autostakkert.com/wp/download/
  8. It can work, but probably not as well as you would expect. With planetary imaging, exposures are kept really short to freeze the seeing (lucky imaging). This means that you don't need very precise tracking, as short exposures will not create much motion blur from a mismatch between the tracking and sidereal speeds. On the other hand - precise tracking is extremely important for DSO astrophotography - since exposures are long, tracking needs to be near perfect. People doing long exposure astrophotography use guiding for that reason - an active system that tracks a star and issues corrections to the mount position on any slight deviation detected. An EQ platform will not be able to track that precisely due to mechanical issues. It tracks perfectly well for visual and well enough for planetary - where it is only important to keep the planet inside the FOV of a small camera (or rather inside the few mm around the optical axis that are diffraction limited / coma free).

     What you could do is try lucky DSO imaging. Some people have had success with this approach, but it is a demanding technique. It is done by taking thousands of images with short exposures - like 1-2 s exposures. This limits the issues with poor tracking, as there won't be enough time to develop significant motion blur (warped stars). In the end, you select the best frames - the ones with round stars - and stack those. The problem is that you need a larger dedicated astro camera to do that - and that is again $$$. You could try with a DSLR, but you would need to lock the mirror, attach the camera to a laptop and use an external power supply. If you do it the usual way - you won't have enough room on the memory card, you'll quickly cause mechanical fatigue as you'll take several thousand shots each session, and I'm not sure the battery can handle more than a few hundred exposures.

     Another thing that would be worth a try with an EQ platform is EEVA. I'm toying with the idea of making a system where you attach your mobile phone to the eyepiece and then stream a series of short exposures from the phone to a laptop, which you then stack and observe in near real time. Check out the EEVA section of SGL for further info. The good thing with this approach is that you can save your "observation" and then do further processing to turn it into a better looking image.
  9. For those, maybe use AS!3 - it should load 8-bit FITS files without any issues. It is, after all, planetary stacking software.
  10. You don't need goto to do it - you need a tracking mount. If you are skilled at DIY - you can make an equatorial platform yourself rather easily. It won't even be very expensive. I'll find you some links to check out for that. https://photographingspace.com/diy-equatorial-platform-pictures/ https://www.skyatnightmagazine.com/advice/build-a-dobsonian-equatorial-platform/ These are just the first few hits on Google. You can find many more resources on the topic - just google DIY equatorial platform. If you don't want to DIY - you can purchase one, but these aren't as cheap as the DIY solutions. Check these out for example: https://www.teleskop-express.de/shop/product_info.php/info/p12059_Asterion-Ecliptica-Light-50---Platform-for-6--to-10--Dobsonians---47--to-53--Latitude.html
  11. Can you be more specific - explain what you mean? It usually matches two images by having a single reference image and another image that is transformed to match the reference image. Often you can choose what type of transform is "allowed". If you want to match two sections of a single image, then you can do this with the above method if you, for example, make copies of selections of that single image and treat them as two separate images. If you want to do some sort of autocorrelation (finding a transform of the original image that would match it to itself in a particular zone or something like that) - then I don't think you'll find such a feature in general purpose software. Your best bet is that someone has already made a specific piece of software that does exactly what you want, or to try to emulate it yourself with a set of operations - or perhaps to write your own using the macro language of some package (like Mathematica or Matlab or something like that).
  12. Try ImageJ / Fiji (which is just a distribution of ImageJ loaded with plugins, but you won't need plugins for this). It opens / saves and will convert FITS files in bulk. This would be the procedure:

     - File / Open Image Sequence / select the folder and then use the name to filter only the subs you want to work with. That will open what is called a stack (it just means a bunch of images connected together that you can "browse" using a slider) - it won't stack them unless you request that in one form or another (maybe it is better if I do a few screenshots).
     - Next, convert to 16-bit format. It might ask something like "do you want all images to ..." - just click yes.
     - Last, save the image sequence. It will ask for a name and format - choose FITS and give it some name. Use the other options to specify the numbering of the saved images (like img_01.fits, img_02.fits, ...).

     And that is it.
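     If you would rather script it, a rough Python equivalent using astropy could look like this (the folder names are just placeholders):

     ```python
     from pathlib import Path
     import numpy as np
     from astropy.io import fits

     src = Path("subs_8bit")    # folder with the original 8-bit FITS subs (placeholder name)
     dst = Path("subs_16bit")   # output folder (placeholder name)
     dst.mkdir(exist_ok=True)

     for f in sorted(src.glob("*.fit*")):
         with fits.open(f) as hdul:
             data = hdul[0].data.astype(np.uint16)   # widen 8-bit data to 16-bit integers
             fits.writeto(dst / f.name, data, hdul[0].header, overwrite=True)
     ```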
  13. Does your dob have tracking (is it the goto version)? If not, I'm afraid that you won't do much planetary imaging with it ... Even if you do have a tracking mount - a DSLR camera is not the best for planetary imaging. For good results with planetary imaging, you want very short exposures - like less than 10 ms - and you want a lot of them, and I mean a lot - tens of thousands of frames are quite common per single video. DSLR cameras have three problems:

     - Large pixel size
     - Video often limited to 30fps
     - Video is recorded using compression

     If you use a regular DSLR with, say, 4.3µm pixel size - then Jupiter, when largest in the sky, will be only about 60 pixels across. Your 8" f/6 telescope has a diffraction limited field with a radius of 2.4mm, so you can shoot planets only in this central zone. Outside of this zone the scope will suffer from too much coma for planetary imaging. If you try the drift method - where you let the planet drift thru the FOV and record it during that time - you will only have about 50 seconds of video. With 30fps - that makes 1500 frames - not nearly enough for a serious image - not to mention that you will have to center the planet and let it drift over the center of the FOV with your camera attached (not impossible but not quite a walk in the park either).

     In reality you want a dedicated planetary camera and a laptop, and of course a tracking mount. This is a good starter planetary camera: https://www.firstlightoptics.com/zwo-cameras/zwo-asi120mc-s-usb-3-colour-camera.html You'll also need a barlow - about x2.5, so get a x2 barlow and dial in the sensor distance. You'll need a laptop with an SSD and USB 3.0 (well, you can still work with a laptop with a regular HDD and USB 2.0 - it's just that SSD and USB 3.0 are standard these days and they will improve things). If you don't have tracking - look at an equatorial platform for Dobsonian telescopes.
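     A rough sanity check of those numbers (assuming roughly 1200 mm of focal length for an 8" f/6 scope and Jupiter at about 45 arc seconds near opposition):

     ```python
     focal_length_mm = 8 * 25.4 * 6     # 8" f/6, roughly 1219 mm
     pixel_um = 4.3                     # typical DSLR pixel size
     jupiter_arcsec = 45                # apparent diameter near opposition (approximate)

     scale = 206.265 * pixel_um / focal_length_mm   # ~0.73 arc seconds per pixel
     print(jupiter_arcsec / scale)                  # Jupiter spans only ~60 px

     # Drift time across the ~4.8 mm diffraction-limited zone at sidereal rate (~15"/s)
     zone_arcsec = 206_265 * 4.8 / focal_length_mm  # ~810 arc seconds
     print(zone_arcsec / 15)                        # ~55 s of usable video per pass
     ```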
  14. In principle yep, that is what I'm saying. 8-bit mode is just 16-bit mode with the least significant bits removed (the lower 8 bits of those 16) for faster transfer and less storage space. Since you used a really high gain - you might recover something from the data - but you need to convert it to 16 bit first in order for DSS to accept it. However, I would not hold my breath, since a lot of the data has simply been chopped off. You can use 8 bit when doing planetary / lunar. There the signal is so strong that losing the 8 lower bits is just losing noise - most of the signal won't be affected because it is very strong. With faint stuff - most of the signal is in the dark parts of the image, and those are exactly the parts you discarded, which will impact the result. Like I said - you can try and see what comes out of this data - but do keep in mind that the data has probably been ruined by using the 8-bit format, and don't feel too bad if there is not much to look at after stacking.
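     For illustration, if the driver simply drops the low byte as described above, a faint pixel value loses almost everything while a strong one keeps most of its information:

     ```python
     faint_16bit = 422                  # faint DSO signal in 16-bit data
     faint_8bit = faint_16bit >> 8      # becomes 1 - nearly all of the faint detail is gone

     bright_16bit = 51_200              # strong planetary / lunar signal
     bright_8bit = bright_16bit >> 8    # becomes 200 - still plenty of signal left
     ```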
  15. Hope you did not spend too much time on this? Here are two tips:

     - don't shoot in 8-bit mode, shoot in 16-bit mode always
     - don't use SharpCap for deep sky imaging

     Although from the exposure length I would say this is more of an EEVA style session? Then it is ok to use SharpCap, but for more serious work - use at least the ASCOM drivers in SharpCap and preferably a different long exposure capture application (NINA seems nice).
  16. I don't think that a separate forum will overcome the ~£5000 worth of trouble of trying NV for most people.
  17. It is best to ignore it. Really. It is a feature of the scope - it is not very intrusive. There are a couple of things you can do to minimize it, but each of them comes at an expense - either a monetary expense or an imaging time expense (more imaging time for the same result).

     1. Get a mono camera + RGB filters. This comes with a monetary expense, as mono cameras and filters are more expensive than OSC cameras. With One Shot Color cameras you only focus once for all wavelengths, and the fact that not all wavelengths of light come to the same focus is most visible in this case. With mono + RGB filters you can focus (and you often need to focus) for each filter separately. This means that you can find the best focus position for blue wavelengths, the best focus position for the green part of the spectrum and the best position for the red part of the spectrum. Any error due to chromatism will be reduced as the focusing error will be smaller.

     2. Get a special luminance type filter - this helps both with mono + filters and with an OSC camera. Astronomik recognized the fact that not all refractors are well corrected in the furthest parts of the spectrum and created the L1-L3 filters that progressively shorten the imaging spectrum. Using the L3 cuts off the most offending parts of the spectrum that cause the most star bloat. This method costs both money (for the filter) and time - as it reduces the amount of light that the sensor captures - it lowers the signal in the SNR equation (but by a tiny amount, less than 5% - and if you want the same result, you need to image 5% longer - that is not much).

     3. Get a mono camera and narrowband filters. This is not a replacement technique, but I mention it as many people do it for shooting nebulae and it completely eliminates issues with chromatic aberration, as it passes only one wavelength at a time (or rather a very small range of close wavelengths that all focus at the same distance). This costs the most money for the camera and filters but actually speeds things up for certain targets, especially in the presence of light pollution.

     4. Get a different kind of filter - a Wratten yellow #8, a Baader long pass 495nm or similar. This costs some money but costs more imaging time due to the light cut off. It also skews the color balance and you need to correct that in post processing.

     5. Aperture mask - this costs the least in monetary terms - a simple piece of cardboard will do. It costs quite a bit in terms of imaging time - but works surprisingly well. This image was taken with an ST102 - which is an F/5 achromatic refractor. It has a crazy amount of chromatic aberration - even for visual, let alone for imaging. The above is a combination of the yellow #8 filter and a 66mm aperture mask. Almost no purple halos are visible in the image.

     But really - out of all of these, I would just recommend a mono camera + RGB / NB filters, or getting the Astronomik L3 filter. Or simply choose not to be bothered at all, as it really is a modest level of chromatic aberration - one that most people won't notice, and there are processing techniques that you can use to hide it in the image.
  18. I'm failing to see the connection - unless you are implying that the ring material was originally part of the planet itself and got ejected from different places?
  19. This is the reason for the larger errors in the graph. PHD2 does not use the guide scope focal length for guiding, but it does use it to report the error in guiding. The error was reported as roughly x3.3 as bad as it really is, because you usually use 242mm of focal length and now you used 805mm - more than x3 longer focal length, and for the same pixel size - more than x3 finer resolution in arc seconds per pixel. The error is reported in arc seconds (on the graph and in the stats, although you can choose which one you prefer - pixels or arc seconds) as that is comparable between different setups (not all setups have the same pixel size and focal length).
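     For illustration, with an assumed guide camera pixel size of 3.75 µm (the exact pixel size does not matter here - only the ratio of the two focal lengths does):

     ```python
     pixel_um = 3.75   # guide camera pixel size (assumed, for illustration)

     def arcsec_per_pixel(focal_length_mm):
         # image scale for a given focal length and pixel size
         return 206.265 * pixel_um / focal_length_mm

     print(arcsec_per_pixel(242))   # ~3.20 "/px with the usual 242 mm guide scope
     print(arcsec_per_pixel(805))   # ~0.96 "/px at 805 mm
     print(805 / 242)               # ~3.3x difference in scale between the two setups
     ```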
  20. The lines are getting blurrier every day. Many people getting into astronomy come with the idea that they want to do it all - they want to observe but also to snap a few pictures, maybe record a video. They often live in big cities with lots of light pollution. One solution to the above equation is a visual setup capable of EEVA on some level, maybe just a smart phone connected to the eyepiece. Once people get into astronomy without preconditioning (I'm a visual astronomer, or I'm an imager, or I'm doing NV), they will be quite open to all styles and won't even think that there is a "strong" distinction. For their benefit it would be nice to have a place to discuss it all.
  21. It is actually not an issue with the camera - but rather with the display device. Our displays are 8 bit and often not even that - some are 6-bit displays. This means that at best, the difference between the brightest star and the least bright one that the display can show is about 1:256, or if we convert that to magnitudes - ~6 mags of difference. At the eyepiece - you can easily see 10 or more magnitudes of difference (a 12 mag star next to a 2 mag star). That is part of the problem.
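     Converting those contrast ratios to magnitudes:

     ```python
     import math

     print(2.5 * math.log10(256))   # ~6.0 magnitudes of range on an 8-bit display
     print(2.5 * math.log10(64))    # ~4.5 magnitudes on a 6-bit display
     print(10 ** (10 / 2.5))        # 10 magnitudes at the eyepiece is a 1:10,000 brightness ratio
     ```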
  22. This would be true if orbits were like this: But inclined orbits are like this: It still orbits along an "equator" of sorts - just not the one defined by the planet's axis of rotation.
  23. In my scope it looks more like this: But you see the problem - my scope is far too small to be able to show the full extent of the nebulosity
  24. I have no idea, and generally know only a little about the topic, but here is a quote from Wikipedia that might offer some clue: To me, this gives a clue that there will be a difference depending on whether the ring material mostly formed from material in the solar system or from material of the planet itself, and that tidal forces of the Sun and the planet have an impact on the orbit of the material. It could be that it has something to do with the shape of Saturn. It is not a perfect sphere because it rotates. It could be that the gravitational pull is not the same for an orbit that is perpendicular to Saturn's axis of rotation and for those inclined to it.