Everything posted by vlaiv

  1. Here is my suggestion based on what you said. Get this scope: https://www.teleskop-express.de/shop/product_info.php/info/p3881_TS-Optics-PHOTOLINE-80mm-f-6-FPL53-Triplet-APO---2-5--RAP-Focuser.html (I've just seen that it has quite a long wait time - 80 days - so consider an alternative) https://www.altairastro.com/altair-wave-series-80mm-f6-super-ed-triplet-apo-2019-457-p.asp This is in essence the same scope, only branded differently. Get an EQ5 mount, or a better one if you can afford it. An HEQ5/EQ6 is going to be a rather heavy mount to carry around, so do think about that. If you want a lighter mount that is still good - take a look at iOptron's offerings. You think you can live without guiding for now - but in reality you can't, so don't be afraid of it. Sooner or later you'll want to guide, and I would say - get right into it.
  2. Yeah, sorry about that - brain doing funny things - I saw 180, said to myself 3 (minutes / exposure), and jumped to 3h total for some reason.
  3. A few findings and results. First, most stars have more than 3.2px FWHM - which means you should really bin x2 (in software, yes - CMOS sensors should be binned in software rather than at capture time, as that allows more flexibility and is in principle the same thing, unlike hardware binning of CCD sensors; see the sketch below). Second - you pushed the data too much. The only bright part of this image is the Cave Nebula itself - the surrounding Ha region is very faint. You can bring it out a bit, but it needs very delicate denoising to be shown with the 3 hours of exposure that you have. Here is my result: Stars are a little bit bloated from the excessive stretch, and yes, using Starnet++ to remove them would be a good approach - you just need to be careful when blending them back in to make them look natural.
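To illustrate, software binning is just averaging (or summing) each 2x2 block of pixels. A minimal numpy sketch, assuming a single-channel linear image (the function and file names are mine, not from any particular package):

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 bin: average each 2x2 block of pixels.

    Halves the resolution and roughly doubles per-pixel SNR,
    since noise drops by the square root of the 4 pixels averaged.
    """
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                    # crop to even dimensions
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Usage (hypothetical file): binned = bin2x2(np.load("stacked_linear.npy"))
```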
  4. There is a bit of "everything" here. You are probably oversampling a bit at 1.54"/px with a 70mm scope. The image looks like it was sharpened and a star mask of sorts was used during processing. Sharpening will emphasize the noise, and how much you stretched will certainly have an impact. You should learn to stretch your data only to a certain point - only as much as the data will allow. Once you start forcing it - it will show. However, I think that you could do some nice noise control with this image. I doubt that extending sub length will have a major impact on the resulting SNR. It will have some, but I doubt it will show in the image. I believe it is much more important how you process the image and possibly how you work with your data. Is this image a crop or a resize of your capture? If it is a resize - why not use binning instead, if you are ok with having a smaller image? That will deal with any possible oversampling and will improve SNR. Would you mind posting your stacked linear data? I would like to have a go at it to see what I can pull from it.
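For reference, the sampling rate above follows directly from pixel size and focal length. A quick sketch (the pixel size and focal length values are hypothetical, chosen only to reproduce ~1.54"/px):

```python
# Sampling rate in arcseconds per pixel:
#   "/px = 206.265 * pixel_size_um / focal_length_mm
def sampling_arcsec_per_px(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical example: a 3.13 um pixel behind 420 mm of focal length
print(sampling_arcsec_per_px(3.13, 420))   # ~1.54 "/px
```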
  5. Do you mind me being slightly harsh on your image? It's not actually the image, but rather the processing. I think you obliterated the image. Again, I'm sorry to be so harsh, but seriously, I think you went too far with the processing. It looks more like a painting than a photograph. Let me show you what I mean. If I were to show this to someone and ask them what they think is in this crop, how many people would respond correctly and say NGC206, or at least - yes, that seems to be some sort of star cluster? I think that the above crop could easily pass as a planet surface or similar. Here it is on another image - same region. If anything, in the second image one can clearly see that the scattered dots are in fact stars.
  6. Sure. I've only guided with color CMOS cameras so far and never had any issues (one of which had the same sensor as the ASI120, but was a QHY model). Color cameras are just a bit less sensitive than mono, but it really makes no difference to guiding. I'm using one with an OAG at 1600mm of FL and it still works - I can always find a suitable guide star.
  7. The camera will come with a 1.25" nosepiece - you insert it into the barlow like a normal eyepiece, and then you put the barlow into the focuser (again a 1.25" connection). Ideally you want something like a x2 to x3 barlow for your scope and this camera.
  8. Hi and welcome to SGL. I second the Skywatcher SkyMax 127. This particular version is interesting: https://www.firstlightoptics.com/sky-watcher-az-gti-wifi/sky-watcher-skymax-127-az-gti.html as it comes with a mount head to track objects, which you control via your smartphone. The only problem is stock at the moment - most items are out of stock due to current circumstances, so you'll have to spend some time finding a retailer that has them in stock, or backorder and wait a bit.
  9. https://pixinsight.com/doc/tools/Resample/Resample.html Don't use IntegerResample - that is for software binning. Here is an example for you (done in ImageJ): The first one is just an image "zoomed in" at 300% that is close to being undersampled (nearest neighbor - pixels are visible). Below it is exactly the same image, but upscaled 300% using a proper scaling method - no more pixels.
  10. That is at 300% zoom in PI, I presume? You'll see "blocky" stars in such a display because it uses only nearest-neighbor interpolation when zooming (so you can see individual pixels). Take the same undrizzled image and scale it to the same size as the drizzled image with a more sophisticated algorithm, like Lanczos or Cubic B-Spline or similar, and then compare the results - see the sketch below.
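As a rough sketch of that comparison using OpenCV (file names are placeholders; any viewer's 300% zoom behaves like the nearest-neighbor case):

```python
import cv2

img = cv2.imread("undrizzled.png")   # placeholder input image

# Nearest neighbor: every source pixel becomes a visible 3x3 block
nearest = cv2.resize(img, None, fx=3, fy=3, interpolation=cv2.INTER_NEAREST)

# Lanczos: smooth resampling with no blocky pixel edges
lanczos = cv2.resize(img, None, fx=3, fy=3, interpolation=cv2.INTER_LANCZOS4)

cv2.imwrite("nearest_300.png", nearest)
cv2.imwrite("lanczos_300.png", lanczos)
```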
  11. I'd say probably this one: https://www.firstlightoptics.com/beginner-telescopes/sky-watcher-mercury-707-az-telescope.html It can actually show you very nice images of Jupiter and Saturn. The Moon will be very nice in that scope (as it will be in almost any scope), and the scope is pretty much plug and play. It is also rather easy to aim, and it looks like a proper scope - maybe a small bonus to help keep the interest. For the most part, the most interesting objects to see will be just - stars. Stars of different intensity, some grouped into clusters, having nice colors. Refractors tend to render stars as pinpoints of light, and this is aesthetically pleasing. Although the above scope is not going to be an optical wonder - it will be surprisingly good for that sort of money. This can help spark the interest, as cruising star fields can often lead to a "discovery" - an object barely noticeable but definitely there. What could that be? Further research will be needed ...
  12. This is for a lens and not a scope? It does not quite look like overcorrection to me. Well - it does for vignetting, but not for dust. In fact, I can't seem to find dust on either of the two subs. The sub without flat correction does not seem to suffer from vignetting. Is there any chance that you might have changed the aperture setting on the lens between lights and flats (by accident or on purpose)?
  13. The fact that you are at 6"/px does not mean you are undersampled. This would probably be true for a diffraction-limited scope, but it won't be true for a lens - and I suppose you are referring to the Samyang 135mm F/2 lens here? Most lenses are not diffraction limited (even if they are deemed very sharp by photographic standards). Not sure if you've seen these two graphs - they are from the Samyang website. Look at 30 line pairs per mm (grey lines - around 90% MTF). This is effectively proper sampling for a pixel size of 16.666µm - see the arithmetic below. You have a pixel size of 3.76µm - which is x4.4 smaller. I recently measured the sharpness of a Samyang 85mm F/1.4 using an artificial star at the center of the field at F/2.8 aperture, and got results that say that at 4.8µm pixel size I'm still oversampling by almost a factor of two.
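A sketch of that arithmetic, assuming the chart's 30 lines/mm means 30 line pairs per mm (the usual MTF convention) and two pixels per line pair (Nyquist):

```python
# Nyquist-sampling pixel size for an MTF spatial frequency:
# one line pair needs two pixels, so
#   pixel_um = 1000 / (lp_per_mm * 2)
lp_per_mm = 30
pixel_um = 1000 / (lp_per_mm * 2)
print(pixel_um)          # 16.666... um

print(pixel_um / 3.76)   # ~4.43 - how many times smaller a 3.76 um pixel is
```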
  14. It depends on the read noise of your camera and its relation to other noise sources. Read noise is the only type of noise that is added per exposure and is not time dependent. All other noise sources are time dependent and grow with total time. If the camera had 0e read noise - there would be no difference between 200 x 1s and 1 x 200s. However, when you have read noise, 200 x 1s actually gets 200 "doses" of read noise while 1 x 200s gets only one "dose". Noises add in quadrature (like Pythagoras' theorem). If you have sides that are vastly different in size - the resulting hypotenuse will be almost as long as the longer side: This means that if read noise is much smaller in magnitude than any other noise - the resulting sum will be almost the same as that bigger component, and read noise will not make much difference. In that case 200 x 1s will be almost the same as 1 x 200s. We can't distinguish by eye noise that is a dozen or so percent higher - it looks the same to us.
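A small sketch of that quadrature sum. The read noise and sky noise figures below are hypothetical, chosen only to show how little per-sub read noise matters once another noise source dominates:

```python
import math

def total_noise(read_noise_e, n_subs, other_noise_e):
    """Read noise is added once per sub; other noise accumulates with total time."""
    return math.sqrt(n_subs * read_noise_e**2 + other_noise_e**2)

# Hypothetical: 1.7e read noise, 30e of sky/target shot noise over the full 200s
print(total_noise(1.7, 200, 30.0))   # 200 x 1s  -> ~38.4e
print(total_noise(1.7,   1, 30.0))   # 1 x 200s  -> ~30.0e
```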
  15. Well, if it says that the magnification is unchanged, then it really should not change the focal length. What version of the CC do you have? Again, that is strange. If it shows a resolution of 4272x2848 - that should be the actual resolution. Maybe try FitsWork to open the raw file (https://www.fitswork.de/software/softw_en.php). I used it to convert CR2 to fits. Once you convert to fits, you can do your own (albeit crude) astrometric measurement - in ImageJ or other software that can measure the pixel distance between two points. You identify two bright stars and measure the pixel distance between them, then use Stellarium and the angle measure tool to measure the angular distance between those two stars. Divide the two and you'll have arc seconds per pixel - see the sketch below. The same number (or very close) should follow from focal length and pixel size.
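A minimal sketch of that measurement - all star coordinates and the measured angular separation below are hypothetical placeholders:

```python
import math

# Pixel coordinates of two bright stars in the image (hypothetical)
x1, y1 = 812.0, 1450.0
x2, y2 = 2390.0, 630.0
pixel_dist = math.hypot(x2 - x1, y2 - y1)

# Angular separation of the same stars from Stellarium (hypothetical: 0.58 deg)
angular_sep_arcsec = 0.58 * 3600

print(angular_sep_arcsec / pixel_dist)   # "/px measured from the sky, ~1.17
print(206.265 * 5.2 / 910)               # "/px from 5.2um pixel at 910mm, ~1.18
```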
  16. This is rather simple. First - your scope is 1000mm of FL. There is sample-to-sample variation in focal length, but that is of the order of a few mm. So your scope may be 1003mm or 998mm depending on how the mirror was figured, but overall it is ~1000mm of FL. If you are using a SW coma corrector like this one: https://www.firstlightoptics.com/coma-correctors/skywatcher-coma-corrector.html then you'll notice that it is a x0.9 coma corrector - which means that it reduces focal length by a factor of x0.9. The actual reduction factor will depend on the distance to the sensor, so if you vary that distance by 1mm - the reduction factor may change by a few percent. In any case, ~1000 * ~x0.9 = ~900mm, so 910mm is a "correct" result for such a combination. If Astrometry.net solved your image for a 10.4µm pixel size - this simply means that the jpeg uploaded straight from the camera was set to a lower resolution than the maximum. Here are the possible settings for the 450D: You probably had it on Small / Fine or Normal. That is roughly half of the original resolution. In fact, when you account for that factor of two, you get a pretty good match. Take a raw image from the camera and run it through astrometry.net - that should give you the best indication of your working resolution. Your camera has a 5.2µm pixel size: 22200 / 4272 = ~5.196623 (the sensor is 22.2mm, i.e. 22200µm, wide).
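The whole chain of arithmetic from the post, in one place:

```python
# Effective focal length with the x0.9 coma corrector
print(1000 * 0.9)        # ~900 mm, so a 910 mm plate solve is a good match

# Pixel size from sensor width and horizontal resolution
print(22200 / 4272)      # ~5.1966 um (22.2 mm sensor width, 4272 px)

# A half-resolution jpeg makes the apparent pixel size twice as big
print(2 * 22200 / 4272)  # ~10.39 um - why astrometry.net reported 10.4 um
```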
  17. It is really not hard, but there is no specific software that does it simply. Most DSLR cameras have built-in color correction profiles (WB presets), and you could probably find those online for your camera, but they will be for the unmodded version. For a modded version, you need to calculate it yourself by shooting color charts / passports and then measuring things. Here is an example I did. I took a color passport image and put it on my phone (the phone is not color calibrated, and for this to work properly it really should be, but I did it more as an exercise): This is the original image I put on my phone: And here is what the raw linear data looks like: Not properly color balanced (white is not white) and not color corrected. After white balancing and gamma 2.2 are applied, it looks more like regular color (but not quite - red is not saturated enough and colors in general look "washed out"): Finally, after applying the color correction matrix, this is the resulting image (with the color template next to it for comparison): There are still some differences - red is now too saturated and teal is not quite right - but overall this is the best result, and the colors are the closest to the original template.
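A sketch of that pipeline. The white-balance gains and the 3x3 matrix below are placeholders - in practice both are derived by measuring the shot color chart - and the matrix is applied in linear space before the display gamma:

```python
import numpy as np

wb_gains = np.array([2.1, 1.0, 1.6])      # per-channel R, G, B gains (hypothetical)
ccm = np.array([[ 1.6, -0.4, -0.2],       # hypothetical matrix; rows map
                [-0.3,  1.5, -0.2],       # balanced RGB -> corrected RGB
                [-0.1, -0.5,  1.6]])      # (each row sums to 1 to keep white white)

def process(raw_rgb):
    """raw_rgb: float array, shape (H, W, 3), linear data scaled to 0..1."""
    balanced = raw_rgb * wb_gains          # 1. white balance
    corrected = balanced @ ccm.T           # 2. color correction matrix
    corrected = np.clip(corrected, 0, 1)
    return corrected ** (1 / 2.2)          # 3. gamma 2.2 for display
```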
  18. The thing is that most people boost color too much in their images. Your image is actually properly color balanced. I measured on a 0.48 color index star (which should be the closest to white) and it matches. If the camera is modded, this means the color transform matrix will be a bit off and you'll get stranger colors. Ideally you want to do your color correction manually (shooting color charts and then deriving a color transform matrix). Most of the time you'll see M31 processed something like this: But in reality, the true color of that galaxy is more like this: Your camera does have a color cast due to the modding that needs to be addressed, but I believe the second one is very close to the true color of the galaxy. Just as a reference, look at these two images: and this: The first is the Hubble team's rendition of part of M31, and the second shows colors for different spectral types under a D65-equivalent illuminant (sRGB used on the internet uses a D65 white point). As you can see, the last two images match almost perfectly, but you rarely find M31 rendered like that. Your image is close, but there is a "brownish" cast, probably due to the modding.
  19. I have a TS 80mm Photoline F/6 APO that might be sourced from the same factory as the WO GT81. With the SW, I would check the following: - what sort of flattener / reducer can be used with it - how good the focuser is and whether it has a threaded connection. An F/6 scope can be reduced to F/4.8 without too much trouble with a good FF/FR. However, in order to get the best performance - there should be no tilt, and a threaded connection is the way to go about that.
  20. Try taking one set of subs (the whole set) and stacking them without alignment - see the sketch below. Use the average stacking method. Although stars don't move between successive subs - they might move over a number of frames. I have had that - perfect guiding over one frame, but the difference between the first and last frame over a few hours could be as much as 15-20px.
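A minimal sketch of such an unaligned average stack with astropy (the file pattern is a placeholder) - any slow drift shows up as star trails:

```python
import numpy as np
from astropy.io import fits
from glob import glob

# Average-stack subs WITHOUT alignment: drift over the session becomes visible
files = sorted(glob("lights/*.fits"))    # placeholder path to your subs
stack = np.mean([fits.getdata(f).astype(np.float64) for f in files], axis=0)
fits.writeto("unaligned_average.fits", stack, overwrite=True)
```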
  21. If you have a mechanical drive - maybe think about replacing it with a new one. It might be nothing, but sometimes when a drive is about to fail these sorts of things start happening - wrong values being written, checksums turning bad, and similar.
  22. That only works for vignetting, with parfocal filters. People who do that probably keep their filters and other bits of the optical train very clean (sealed filter wheels and a screwed-together optical train that does not get disassembled often).
  23. It is about "endianness" and what is expected. The endianness of a numeric format on a particular platform is the order of the bytes in multi-byte numeric formats. For example, let's say you have a 32-bit integer. When written down, such a number can be thought of as 4 consecutive bytes (each byte has 8 bits, and 32 bits is formed by 4 groups of 8 bits). The order in which those bytes are written is the endianness. There are two different ways to write it - little endian and big endian. One is from high to low and the other is the reverse - from low to high. You can sort of think of it like this - you are writing a number one digit at a time. You can write it left to right or right to left, so the number one hundred can be written as 001 or 100 depending on the direction. Different processor manufacturers decided to hold numeric values in memory in different orders, and programmers did what was easiest - when writing numbers from memory to a file, just write them in the same order they were stored in memory. This led to problems when trying to share data files between systems based on processor architectures with different endianness, as numbers were read and written differently - as in the above example, if you read 001 the way we normally read it, you'll read one and not one hundred. For the purposes of communications and file formats it is important to define whether numbers are stored in big endian or little endian format. For example, the internet uses big endian format (networking in general - it is often called network order). Here is what the SER format specification states on the endianness of data: I think that the above error has to do with the fact that the expected value was not written in the proper format (little endian specified in the file header but the value actually written as big endian, or something like that).
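The mismatch is easy to demonstrate with Python's struct module - the same 32-bit value read back with the wrong byte order:

```python
import struct

value = 100

little = struct.pack("<I", value)   # little endian: b'\x64\x00\x00\x00'
big    = struct.pack(">I", value)   # big endian:    b'\x00\x00\x00\x64'

# Reading little-endian bytes as if they were big endian gives garbage,
# just like reading "001" left to right gives one instead of one hundred
print(struct.unpack(">I", little)[0])   # 1677721600, not 100
print(struct.unpack("<I", little)[0])   # 100
```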
  24. I would try two different things here. First is to figure out whether there is drift and what sort of drift it is (straight/linear - bad, or random - sort of good). To do that, just stack the subs without alignment and look at the star trails that form. Second is to try a different debayering method. Use super pixel mode and see if that changes anything - see the sketch below. What software are you using for stacking? Changing the registration/alignment resampling method might help as well. Don't use linear if you can help it - use a more advanced resampling method.
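For reference, super pixel mode collapses each 2x2 Bayer cell into a single RGB pixel, so no interpolation between neighboring cells is involved. A minimal numpy sketch, assuming an RGGB pattern (adjust the offsets for your camera):

```python
import numpy as np

def superpixel_debayer(raw):
    """Super pixel debayer for an RGGB pattern (assumed - check your camera).

    Each 2x2 Bayer cell becomes one RGB pixel: no interpolation between
    cells, at the cost of halved resolution.
    """
    h, w = raw.shape
    raw = raw[:h - h % 2, :w - w % 2]          # crop to even dimensions
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])
```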
  25. That framing thing was my attempt at constructive criticism. Everything else with the image is fine, so I just pointed that one out to give some feedback.