Everything posted by vlaiv

  1. I second the above recommendation, but I do have a question for you: when you say that your scope is limited for imaging smaller galaxies - what exactly do you mean? If you are referring to resolution - the ASI1600 (from your sig) coupled with the ED80 at native focal length will give you 1.3"/px. With the Heq5 it is very unlikely that you will want to go any lower than that, regardless of the scope that sits on it. For example, if you choose to go with the above scope (and I think that you should) - you will be working at a native 0.57"/px with the ASI1600. That is too much, and you will want to bin in software by a factor of at least x2, which will give you 1.14"/px - still too much, I think. Now, if you don't have enough light grasp to go for fainter things, then yes - going with a 6" scope will give you more light-gathering capability. Just don't forget to adjust your sampling rate to something reasonable (1.2"-1.5"/px for high power work with a 6" scope and Heq5).
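The sampling-rate arithmetic above can be sketched in a few lines of Python (the 3.8um pixel size for the ASI1600 and the 600mm ED80 focal length are assumed figures, as is the usual 206.265 constant):

```python
# Sketch of the sampling-rate arithmetic: arcsec per pixel from
# pixel size (um) and focal length (mm).
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Return sampling rate in arcsec/px: 206.265 * pixel / focal."""
    return 206.265 * pixel_um / focal_mm

# ASI1600 has 3.8 um pixels; ED80 native focal length is 600 mm.
native = pixel_scale(3.8, 600)        # ~1.31 "/px
binned2 = pixel_scale(3.8 * 2, 600)   # software bin x2 doubles the scale
print(f'{native:.2f}"/px native, {binned2:.2f}"/px binned x2')
```

The same function shows why binning helps an oversampled setup: doubling the effective pixel size doubles the arcsec/px figure.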
  2. I went for ASI a couple of times and I'm happy so far. My first dedicated astro camera was a QHY and it was a planetary type. It worked, but I had issues with drivers and it was quirky. I replaced it with an ASI185 and everything just worked, so I was rather happy about it - or to put it differently - that is the way a device should work: without any issues. Next I got myself an ASI178mcc (cooled version) - and that one worked well. There were some issues, but nothing to do with the camera - a poor USB 3.0 cable was the reason, and the camera experienced random disconnects / reconnects. I replaced the USB 3.0 cable, added a good USB 3.0 hub, and haven't had any issues since. The same thing happened with my ASI1600mmc - USB disconnects / reconnects - more severe than with the ASI178mcc, but again it was all resolved when I replaced the USB cables and added a hub. I have no experience with Altair cameras, and at the time I was making my decision I went for reliability and driver stability, so I chose ASI over QHY back then. HTH
  3. Well, there is only so much a man can do with limited 16-bit data. There is a severe issue with the OIII and SII subs at the edges for some reason (left/right edge) which I could not get rid of - hence the bluish / purplish left and right sides of the image.
  4. Am I missing something here? Did you upload something different? It's been cropped differently and possibly already stretched. Look at that star profile: not the proper thing, is it?
  5. Sure I do. I'm not saying I'll provide any meaningful results, but fiddling around with data is much fun.
  6. Why do you use 16-bit format? You are losing a lot of precision that way. Let me show you something. Look at the histogram above and my black/white point settings - they are set to 40 and 270 - there are fewer than 256 levels in there, and almost the full dynamic range of the image falls within that span. This means that in reality you have 8 bits of data for your nebulosity, because you are using a fixed-point format - 16 bit. With a fixed-point format, all pixel values must fall in that range and must be aligned to the precision supported by the format. In a floating-point format, every pixel is free to have its own range (hence floating point) and you won't run out of bits any time soon (precision is something like 15 significant digits). Could you do another upload - but this time just save the image in 32-bit format (FITS preferably) and don't even do deconvolution - just bare registration (alignment) and stacking.
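A toy numpy sketch of the precision argument (the 40-270 range is taken from the histogram discussion above; the rest is purely illustrative): values confined to a narrow slice of the 16-bit range get quantized to whole ADUs, while float32 keeps them essentially intact.

```python
import numpy as np

# Illustration: values confined to 40..270 (out of 0..65535) survive a
# float32 round trip but collapse to ~230 integer levels in 16-bit storage.
rng = np.random.default_rng(0)
data = rng.uniform(40.0, 270.0, size=100_000)  # fractional stacked pixel values

as_u16 = data.astype(np.uint16).astype(np.float64)   # fixed point: whole ADUs only
as_f32 = data.astype(np.float32).astype(np.float64)  # float: ~7 digits preserved

print("distinct levels in 16-bit:", len(np.unique(as_u16)))
print("max error, 16-bit:", np.abs(as_u16 - data).max())
print("max error, float32:", np.abs(as_f32 - data).max())
```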
  7. Yes, the stars were probably very hard to control because the ED80 is a doublet, but the image is very nice regardless of that.
  8. Yes, you are quite right with that designation. Getting Stellarium on your computer can be a great help with getting to know the skies. Here is a screenshot from Stellarium with the items you have marked. The Andromeda constellation, on the other hand, is really easy - at least part of it. Once you learn to identify those parts, the rest will be easy, but you do need to wait for summer for the best view of it - right now it is pretty low. You start by finding Cassiopeia - W-shaped, very easy to spot - you have correctly marked it in your image. Just below it - when it looks like a W rather than an M - you will spot 4 stars almost in a line (a very slight curve). All of these stars are bright, easy to spot, and sit right below the W. You have named those stars correctly in your image as well - I put little arrows to those stars in my second diagram from Stellarium. Going left to right, the first star is in Perseus and is not part of the Andromeda constellation. The next three stars are in the Andromeda constellation. In the diagram above, Andromeda is joined with the Pegasus constellation, because they share a star: the fourth star in the row (third in Andromeda) is actually alpha Andromedae but also delta Pegasi. I added other lines, not mapped out in Stellarium, to show which other stars are also in the Andromeda constellation. Finding the Andromeda galaxy / M31 is super easy once you learn to find these four stars below Cassiopeia. Here is a diagram and explanation: I have marked the 4 stars of interest - you need to identify the third one and "work up" towards Cassiopeia from it (let the line connecting these stars be the base line). A very small distance away you will find one faint star (marked with an arrow); if you then continue up by about the same distance as from the base line to this star, there are two more stars (also marked). Together these four stars create a sort of Y sign / "cocktail glass" shape.
The M31 galaxy is the "umbrella" in this cocktail glass. Once you have found the Y and the possible location of M31 (it might be challenging to see even with averted vision if you have poor eyesight or heavy light pollution) - M33 is just on the other side of the "base line" (marked as well), about the same distance away.
  9. Not sure about a tripod. Some people use one, but I tried it when my Skymax 102 arrived and did not quite like it. I tried it on a very basic photo tripod with a ball head. The dovetail has a 1/4" thread on it and you can attach it like a camera - but you lose the ability to balance it that way. I've seen some people use a ball head for the scope with the whole thing tilted at 90 degrees like this: Rest of the images here: Although I've heard that Sky-Watcher's latest tripod is not really "heavy duty quality", it might be an interesting option: https://www.firstlightoptics.com/sky-watcher-az-eq-avant/sky-watcher-skymax-102-az-eq-avant.html but do be careful with the latest SW bundled scopes - it looks like they are missing collimation screws at the back (silly cost saving?), so it might be a better option to buy the OTA and mount separately. FLO does not seem to stock the AZ-EQ Avant mount by itself, but it is available: https://www.teleskop-express.de/shop/product_info.php/info/p11298_Skywatcher-Avant-Mount---altazimuthaly-and-equatorialy-usable---capacity-up-to-3-kg.html and you can later even buy a tracking motor for it (FLO stocks it). I decided to go with this combination - and I'm very happy with it: I'll probably add a wedge and counterweight at some point for a really portable wide-field AP / EEVA rig.
  10. It looks like we have some confusion here, or is it just me? Some seem to talk about captions on the image itself, while others talk about information supplied with the image - in textual form, posted here on SGL or maybe on a blog/website where the work is kept and presented to the public. As for captions on the image - sometimes I find it useful to have some additional data, but not what has been mentioned here. The object name and some astronomical info on the object are fine if they enhance rather than detract from the image. A few markers if the image shows something that is otherwise hard to spot. Pixel scale, RA/DEC orientation and coordinates are also not a bad thing and will not take up much space. Calibration bars can be useful - for example if a particular false-color scheme is used, or some feature of the object is coded in the colors used (magnitudes instead of brightness, or whatever). Otherwise, I would leave the image to tell the story. Additional information that really is useful for comparison and general knowledge - gear, exposure details, software used, maybe even processing steps or at least a general outline of the processing (for example LRGB - RGB ratio color composing + stellar color calibration, or similar) - should certainly be included in my view, but as accompanying text rather than as part of the image. Some people like to include general text / some info on the target - distance, size, age, classification, peculiarities - and I like that too, even if it is an excerpt from the Wiki page on that object.
  11. I guess it would be a simple Open. I don't remember ever using Open as Layers.
  12. It really depends on the telescope that you are using. With something like 70-80mm of aperture in regular seeing, you are right where you want to be at ~2"/px. 2.23"/px is the ideal sampling rate for a FWHM of about 3.5". 80mm of aperture has an Airy disk diameter of 3.21" - add to that a bit of guiding error and a bit of seeing and you can easily get to 3.5" FWHM (the FWHM corresponding to an Airy disk diameter of 3.21" is less than 2", but again - add those influences and you are easily back over 3"). I don't know what sort of spot diagram the Tak Epsilon 180 has - I would not be surprised if it is perfect, given that it is a Tak after all - but that scope is a fast wide-field astrograph, and a less than perfect spot diagram is certainly acceptable since such a scope is used for wide field and an inherently low sampling rate. When you think about it, what would you consider to be a wide-field image? I would put it somewhere above 2-3 degrees of FOV, right? Now couple that with a modern sensor that has 4000-5000 pixels in width - what sort of resolution can you expect from such a combination? 2-3 degrees = 120-180 arc minutes = 7200"-10800"; now we divide that by 4000-5000 pixels and we get 7200/4000 = 1.8" and 10800/5000 = 2.16". So you see, 2"/px is the start of wide field one way or another (meaning a FOV larger than 2 degrees and a sensor of 4/3 size and up with a decent pixel count). For that reason I would say that 2.23"/px is a very decent wide-field sampling rate - even on the lower side of things. If you want a really wide field - to fit the whole M31 or similar - you need at least 3-4 degrees, and that means almost double what we've calculated to be a feasible sampling rate: 3"-4"/px is perfectly fine in those cases (that, or use of APS-C / full-frame sensors with enormous pixel counts - 6000-8000px in width).
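The wide-field arithmetic from the post - FOV in arcseconds divided by sensor width in pixels - is a one-liner:

```python
# Minimum useful sampling rate for a given field of view and sensor width:
# FOV (degrees) converted to arcseconds, spread over the pixel count.
def widefield_scale(fov_deg: float, width_px: int) -> float:
    return fov_deg * 3600 / width_px

print(widefield_scale(2, 4000))  # 1.8 "/px
print(widefield_scale(3, 5000))  # 2.16 "/px
```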
  13. 2.23"/px is a very good working resolution for a wide-field setup. 0.93"/px is indeed too high a resolution, even for a 10" scope. I would say that you need to find a sensor that keeps the current FOV or expands it, and improves things on the read noise and QE front, for that to be a viable swap. An alternative would be to just fix the Newtonian in terms of sampling rate to make it better suited - that could well be a less costly affair than going for a different camera. You have mentioned 3.69um pixels - are you looking at the ICX814 sensor? You would be losing too much FOV with a 16mm diagonal. Have a look at this instead: https://www.teleskop-express.de/shop/product_info.php/info/p4685_ASA-2-inch-Newton-Coma-Corrector-and-0-73x-Reducer-for-Astrophotography.html - that would make 1.28"/px, a much better sampling rate for high-resolution work.
  14. Just checked - indeed, I myself was using 25 offset back in 2017 and changed that to 64 in 2018. I do remember something about a drivers update, but have no idea if it was related. Here are some of my backed-up files: just a few months later: Once again, I'm sorry to have caused confusion.
  15. Indeed, again, sorry for that - I remember now, there was a drivers change in the meantime (and a couple more since) and indeed I redid my darks after that with the new settings.
  16. Yes, by all means upgrade to the ASI224 if you can - that is a very good planetary camera. Make sure your laptop can support it - that it has a USB 3.0 port and that your hard drive is fast enough. I also favor OSC-type imaging for planets - given the time involved in switching filters and refocusing, I see no benefit in using the LRGB approach. When I first started in planetary imaging I also used a modified web camera, but the issue with such devices is that they compress video - which is bad. You want raw video. When I switched to my first planetary camera - a QHY5LIIc - there was a massive difference in the quality of the results. For example, this is Jupiter with a modified web camera: Same scope as above - 130mm SW F/7. The image is just too soft due to compression artifacts. The second Jupiter above was taken with an ASI178mcc (cooled version) - which I got for a mix of things: some planetary imaging, some EEVA and a bit of regular imaging - hence the cooled version. You don't need cooling for planetary imaging, so don't worry about that.
  17. I don't do much planetary imaging lately, but I do understand the principles of it and will be happy to answer any questions that are within my reach. With regards to SharpCap - I used both SharpCap and FireCapture when I started, but have used only SharpCap for quite some time now. I don't use it much since I've not done much planetary or EEVA lately, but I will in the coming months, since I now have a small dedicated lunar/planetary scope that I'll be messing around with for imaging. I don't have much in terms of good images, maybe just a couple to show off, like these: All of the above were taken with a 130mm Skywatcher F/7 (900mm) Newtonian scope on an Eq2 mount. A couple of years ago I gave my RC 8" scope a go for planetary, but due to its largish central obstruction it's not really well suited for that:
  18. You should switch to AS!2/AS!3 for stacking purposes instead of Registax - it's a better tool for this purpose. Registax is still used by the majority of imagers for wavelet sharpening (there is also Astra Image - commercial software, but with a demo version available) and some processing (RGB align / white balance and such). PIPP is used as a pre-processing tool - it allows you to calibrate your recordings (flat / dark calibration), convert between formats, select good frames, do basic alignment and such. De-rotation is a procedure used to adjust longer video captures of planets that rotate too fast - namely Jupiter. If your video is too long and you leave it as is and stack it - you will get motion blur due to the rotation of the planet. De-rotation makes sure that does not happen. It is worth doing if your video is particularly long for a given planet - in the case of Jupiter, if you record for longer than 5-6 minutes. It also depends on your sampling resolution (pixel scale, or arcsec/px). When working at higher resolution (larger aperture) you are more likely to see the effect, or rather it will take less time for it to show. Btw, due to the way AS!2/3 works, it can take care of small rotation issues and will not produce motion blur if you "overstep" the video duration by some degree. In principle, if you limit your total imaging time to 5 minutes you won't need de-rotation in most cases (this means total shooting time, not total video time - if you do LRGB with a mono camera, you need to be done shooting all filters in that time period; if not, you will need to de-rotate).
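As a rough back-of-the-envelope illustration of why capture length matters, here is a Python sketch of rotational smear (the ~45" apparent disk and ~9.93h rotation period for Jupiter are my own assumed round figures, not from the post):

```python
import math

# A feature at the planet's central meridian drifts at roughly
# pi * disk_diameter / rotation_period arcseconds per second,
# since the limb-to-limb width is traversed in half a rotation.
def rotation_smear(disk_arcsec: float, period_s: float, capture_s: float) -> float:
    return math.pi * disk_arcsec / period_s * capture_s

smear = rotation_smear(disk_arcsec=45.0, period_s=9.93 * 3600, capture_s=300)
print(f'~{smear:.2f}" of drift in a 5-minute capture')
```

At typical planetary sampling rates that drift spans several pixels, which is why long captures need de-rotation.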
  19. I'm sorry if I misled you on those parameters, but in all honesty, I don't recall recommending 25 as the offset. I did recommend unity gain on more than one occasion (pretty much every time the gain of the ASI1600 popped up in discussion) but I really can't recall an offset of 25 for this camera. As far as I remember, I did say a couple of times that an offset of 50 should be fine, and that I use offset 64, which might be too much but won't produce ill effects and is on the safe side. I also gave instructions on several occasions on how one should determine the proper offset for their camera - exactly as I described above: you shoot bias subs - a small number of them (a dozen or so is enough) - stack using minimum (you want to see the minimum of the sub minima) and inspect the minimum value. Check whether it is equal to the minimum value the camera can record (usually 0 or 1, scaled with the number of bits - in the ASI1600's case it is 1, but since the camera records in 12-bit format and stores it in 16-bit format, the scaling factor is 2^(16-12) = 16, so the minimum value will be 1*16 = 16). If it is, the offset is too low.
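The offset check described above could be sketched like this with numpy (the floor of 16 follows the 2^(16-12) scaling explained in the post; the synthetic bias frames are purely illustrative):

```python
import numpy as np

# Stack bias subs with a minimum combine and verify the result never
# touches the lowest value the camera can record (1 * 2**(16-12) = 16
# for the ASI1600's 12-bit data stored in 16-bit format).
def offset_ok(bias_subs: np.ndarray, floor: int = 16) -> bool:
    """bias_subs: array of shape (n_subs, h, w) of 16-bit bias frames."""
    min_stack = bias_subs.min(axis=0)
    return int(min_stack.min()) > floor

# Synthetic example: offset set high enough that no pixel clips at the floor.
rng = np.random.default_rng(1)
good = rng.normal(64 * 16, 30, size=(12, 100, 100)).clip(16, 65535).astype(np.uint16)
print(offset_ok(good))
```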
  20. It might be that the definition relates to the planet-formation period, not later captures. It also probably refers to independent bodies in the same orbit rather than satellites (although I'm not sure if L-point objects can indeed be called satellites?).
  21. Here is the best I was able to pull out of this data - hope you like it. I'll do another step-by-step tutorial later to explain how I managed to process the image like this. It's going to be a bit more involved - it's somewhat "advanced" stuff, as it includes another piece of software (also free, like Gimp) and a plugin that I wrote for it (background removal).
  22. Like @wimvb said, unlikely - a small finder will cool down fast, and I believe that tube currents would cause effects similar to astigmatism - a soft image for planetary work rather than a guide star jumping around.
  23. Lum is much more than the sum of R, G and B. For example, if you were to create a synthetic lum by taking 1h of R, 1h of G and 1h of B and stacking those together, you would get a poorer result than from 1h of lum. The difference is in the read noise (if there were no read noise, 1h each of R, G and B summed would be the same as a single hour of lum). This means that 3h total of imaging will give you a poorer result than a single hour of lum. Lum is good - otherwise people would not use it. That is bad - it means that your offset is too low. You want none of your subs to have any value equal to 16. There is nothing you can do now to fix this data, but you can increase the offset for future use. Make it larger than 50 or so (I use 64).
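The read-noise argument for lum versus synthetic lum can be put into toy numbers (all figures here are illustrative assumptions, and each channel is simplified to a single read-noise dose):

```python
import math

# A lum exposure collects the combined R+G+B signal with one dose of
# read noise; summing separate R, G and B stacks collects the same
# signal but pays the read-noise variance three times.
def snr(signal_e: float, read_noise_e: float, n_doses: int) -> float:
    shot_var = signal_e                       # shot-noise variance = signal (e-)
    read_var = n_doses * read_noise_e ** 2    # read-noise variance per dose
    return signal_e / math.sqrt(shot_var + read_var)

signal = 900.0       # total target electrons collected
read_noise = 10.0    # e- per dose
print("lum (1 dose):        ", snr(signal, read_noise, n_doses=1))
print("R+G+B sum (3 doses): ", snr(signal, read_noise, n_doses=3))
```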
  24. Actually there is much more in this data if it is processed properly. Since the original stack contains only a limited amount of data, but has a rather large pixel count (6000 x 4000), and since the stars are not quite good, we can safely bin it down to a regular size. I've chosen bin x4, for a picture size of 1500 x 1000 - which I feel is a good representation of the image. Here is the green channel extracted and treated as a luminance layer (green in cameras is modeled on human vision's sensitivity to brightness, so in principle it is OK to use it as luminance; add to that the fact that the sensor gathers twice as many green samples as red or blue, due to the pixel layout of the Bayer matrix). Here we can see the full extent of M42, but also the Running Man above it. The stars are also tighter in green, because the lens used is not well corrected for color and there is blue fringing otherwise.
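The software bin x4 mentioned above is just block averaging - a minimal numpy sketch:

```python
import numpy as np

# Average each factor x factor block of pixels, shrinking the frame
# by that factor in each dimension (6000x4000 -> 1500x1000 at x4).
def software_bin(img: np.ndarray, factor: int) -> np.ndarray:
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

frame = np.ones((6000, 4000), dtype=np.float32)
print(software_bin(frame, 4).shape)  # (1500, 1000)
```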
  25. Ok, so here are the steps to get a basic stretch done in Gimp. I'm using the Gimp 2.10.4 standard Windows install.
- Open up your tiff image in Gimp.
- The first step is to use levels to do an initial stretch, so go to the Colors / Levels menu option and do the following: take the top (white point) slider and pull it left until the brightest parts of the nebula become bright enough - don't let them turn white, as that will clip detail. In your image the 4 stars of the Trapezium are not resolved and are burned out already, but you can watch the nebulosity around them - as soon as it starts being too bright, stop adjusting levels. It happens somewhere around 50 out of 100 (or with the slider half way to the left).
- The next step is to move the middle control in levels. Again go to the same menu option - Colors / Levels - and this time adjust the middle slider. This is a part you need to do by feel, as there is no particular guideline on how far to pull it - its effects will be seen in the next step, so if things are not good there, undo and repeat this step less aggressively (less to the left).
- A third pass with Colors / Levels resets the black point to its regular value - again do the same Colors / Levels, but this time move the black point, i.e. the leftmost slider. Here the rule is simple: pull the black level marker up to the foot of the histogram. If your background looks too grainy and bad, you overdid the previous step - undo it and repeat with less aggression. It is OK if there is a bit of grain in the background; we can fix that with denoising later.
- The next step is a slight curves stretch to bring out the brightness of the features. We do it via the Colors / Curves command: we need two points - one "anchor" point that we will not move - put it somewhere at the base of the histogram and just make sure you don't move it (the line should still be diagonal). Then take a point somewhere in the middle and drag it up until you get a nice brightness in the features - don't overdo it, as again you run the risk of burning out the core (bright regions).
- The next step is a slight color correction, as the image now has a red cast due to LP. A simple way to do it is temperature correction: move the intended temperature slightly lower and the original temperature slightly higher - observe what happens to the image and stop where you feel it is good.
- Next, let's do a little denoising. Select the layer and duplicate it, select the top layer and run the menu command Filters / G'MIC-Qt, then select Repair / Smooth (wavelets). Increase Threshold until you get a rather smooth image. After that you need to add a layer mask on this smoothed layer - right click on the layer and add a layer mask, set to an inverted grayscale copy of the layer. You can then use that layer's opacity to control how much noise reduction is applied to the final image (it will be applied more in dark areas due to the layer mask, but by lowering the opacity you have further control).
And voila - save your image and present it:
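For those who prefer numbers to sliders, the Levels stretches above amount to something like this numpy sketch (the black/white/gamma values are placeholders of mine, not the tutorial's actual settings):

```python
import numpy as np

# Levels in one function: remap black..white to 0..1, clip, then apply
# the middle-slider "gamma" lift (gamma > 1 brightens the faint end).
def levels(img: np.ndarray, black: float, white: float, gamma: float = 1.0) -> np.ndarray:
    """img in 0..1; returns a stretched copy clipped to 0..1."""
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** (1.0 / gamma)

img = np.linspace(0, 1, 11)                               # toy gradient "image"
stretched = levels(img, black=0.02, white=0.5, gamma=2.2)  # lift faint nebulosity
print(stretched.round(3))
```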