Everything posted by ONIKKINEN

  1. Care to elaborate on how this would work on a non-Pro Windows version? I'm using a mini-PC myself with Windows 10 Pro, and last I checked Pro was still necessary. Mine is not the best in terms of performance, so I will likely upgrade one day and would like to know whether Pro really is needed or not.
  2. Nice detail on galaxies this small. You often go for these smaller ones and seem to pull off a great deal of detail; do you know what sort of FWHM seeing you typically experience when imaging them?
  3. I hadn't heard of the Atik Base before, but it looks like the kind of thing meant to compete with the Asiair: a small scope-side computer. One article mentions that it is based on the Stellarmate, which would be another option. There are also Raspberry Pi based options, or a Windows mini-PC. Personally I think the Windows mini-PC has the most to offer, since you can run the full desktop version of any normal Windows application on it, so there is really no combination of kit that it could not be made to work with.
  4. In PixInsight you could combine the images with the NBRGB combination tool (stack both as separate images first and extract RGB from the duo-narrowband to use as Ha/O3). The results vary and it takes some effort to try different settings, but I think this might be the easiest way. I would use the stars from the LP-filter image and a starless layer mixed from both the LP and duoband images with the NBRGB tool. Probably not very scientific, but it should work to make a nice-looking combined image. Alternatively you could try combining with PixelMath using the max operator, but this has the potential to go wrong if the two datasets vary greatly in SNR, and it takes more work to get a nice result IMO. A third option would be combining in Photoshop with the lighten blend mode to add the best parts of the narrowband image where you want them.
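For what it's worth, the max-operator idea is easy to sketch outside PixInsight too. A minimal NumPy illustration with made-up placeholder arrays, assuming both stacks are already registered and normalised to the same scale (otherwise the stack with the higher background simply wins everywhere):

```python
import numpy as np

# Per-pixel max combine, mimicking PixelMath's max() operator.
# The arrays below are made-up placeholder data, not real stacks.
def max_combine(broadband: np.ndarray, narrowband: np.ndarray) -> np.ndarray:
    return np.maximum(broadband, narrowband)

lp = np.array([[0.10, 0.30], [0.05, 0.80]])   # LP-filter stack (placeholder)
duo = np.array([[0.25, 0.20], [0.05, 0.90]])  # duo-narrowband stack (placeholder)
print(max_combine(lp, duo))  # each pixel comes from whichever stack is brighter there
```

This is also why mismatched SNR is a problem: where one stack is noisier, the max picks up its positive noise excursions instead of real signal.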
  5. I have cleaned my mirror twice now: the first time because there was some oily-looking, rainbow-coloured residue on it (not sure where from, possibly from my car exhaust as the scope was nearby), and the second because I had the mirror out of the scope for other reasons. The second cleaning wasn't really necessary, but might as well, since the mirror was already out. Generally speaking I think it's best not to clean it unless there is something on the mirror that could damage the coating, and if you're unsure then I'd say don't clean it.
  6. If you have a very busy image like this one you shouldn't use the automatic grid placement, since it will definitely put some samplers where you wouldn't want them. You will want to manually place samplers on the least-bad places around the image. Realistically all of the samplers will end up on nebulosity in this case, since the whole image is nebulosity, so some will probably have to be removed. Experiment with the smoothing function and see what works best. As an example, I might place samplers like below (although it would be easier to tell the best positions from the raw data). You might not need this many samplers either; it really takes a bit of trial and error to get the best out of it.
  7. The thing is, very small amounts of tilt like the one shown here are very difficult to iron out. You would first have to figure out which part of the system the tilt comes from: the lens cell, the focuser, the adapters, or the sensor. Very likely a mix of many or all of those, and it's not at all straightforward to figure out which of them is the issue; it's very easy to go on a wild goose chase and introduce more issues. There was someone here who built a laser alignment-thingy-doodad-device that measured tilt on a sensor, but I can't find the thread now since the forum has some search-function issues. The thread was most likely in the DIY section, so some manual searching would probably uncover it. Trying different camera orientations will also give more clues about where the tilt originates: if you rotate the imaging train and the tilt stays on the right side, then it is somewhere in the imaging train; if it moves corners, then the tilt probably originates from before the imaging train. All a bit of guesswork to be honest. I would be very pleased with what you already have, although I am used to some really cursed stars from a Newtonian.
  8. Your tilt measures 10%, which is basically nothing at all. Off-axis aberration is also around 10%, which is very good. My advice would be to leave it as it is, enjoy a working telescope, and not try to fix something that is not clearly broken. The right side has barely perceptible tilt that BlurXTerminator would definitely get rid of completely. Alternatively, binning or resampling a little would probably hide it too.
  9. Yes, it should scale the histogram correctly now. You should be looking at a median value of roughly 8000 for a mid-level exposure at 14-bit.
  10. NINA is displaying a 16-bit histogram here for some reason, whereas your image is presumably taken with a 14-bit camera. A 14-bit camera has 2^14 = 16384 levels, so the maximum value for a fully saturated pixel is 16383 (0 counts as one of the levels). You need to go to Options -> Equipment and, under the Camera tab, set your bit depth to the correct value, which would be 14. This should also make the flats wizard work, if the problem with determining a good exposure was caused by the incorrectly assumed bit depth of the camera.
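The bit-depth arithmetic is easy to sanity-check. A quick sketch of the numbers involved (illustrative only, not NINA's actual code):

```python
# A 14-bit sensor has 2^14 = 16384 levels, so the maximum ADU value is 16383.
bit_depth = 14
max_adu_14 = 2**bit_depth - 1
print(max_adu_14)  # 16383

# A 14-bit value displayed on a 16-bit histogram is effectively shifted up
# by 2 bits, so a half-scale value (~8192 ADU) shows up near ~32768.
adu_14 = 8192
adu_16 = adu_14 << (16 - bit_depth)
print(adu_16)  # 32768
```

This is why a perfectly good flat can look "too dark" on a histogram that assumes the wrong bit depth.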
  11. I reached the same conclusion as @AstroMuni above: calibration with this set of frames works as it should. Below is a false-colour rendering at an extreme stretch. As we can see, the image is very flat apart from the sky gradient, but that is unrelated to the issue; no ring artifact to be seen. What you describe with the focuser tensioning and such is exactly what makes the ring thing happen, because at one camera orientation gravity will act on the imaging train differently than at another. You could test the stability of your whole system though, and it's fairly easy to do. Take a flat with the telescope horizontal to the east, another with the scope horizontal to the west, then one at the zenith, one to the south and north, etc. The point is to get a flat in as many imaging-train orientations as possible. Then do a simple PixelMath division between any two of the flats (flat east / flat west, for example). What you should get is a uniform grey mess with no clear gradient or any features to be seen. If you see a gradient or any feature other than noise, you have a mechanical problem. This method reveals even the smallest mechanical issues in the scope, give it a try?
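The flat-division test is easy to script as well. A minimal NumPy sketch with synthetic flats standing in for real ones (the 0.05 threshold is an arbitrary assumption, tune to taste):

```python
import numpy as np

# Divide two flats taken at different scope orientations; if the system is
# mechanically stable, only noise should remain in the ratio image.
def flat_ratio_spread(flat_a: np.ndarray, flat_b: np.ndarray) -> float:
    ratio = flat_a / flat_b
    ratio /= np.median(ratio)  # normalise away any exposure difference
    # 1st-to-99th percentile spread: near zero means "uniform grey mess".
    return float(np.percentile(ratio, 99) - np.percentile(ratio, 1))

rng = np.random.default_rng(0)
flat_east = 30000 + rng.normal(0, 50, (100, 100))  # synthetic east flat
flat_west = 30000 + rng.normal(0, 50, (100, 100))  # synthetic west flat
spread = flat_ratio_spread(flat_east, flat_west)
print(spread < 0.05)  # True here: the synthetic flats differ only by noise
```

With real flats, a residual gradient or ring in the ratio image shows up as a much larger spread.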
  12. The initial thought I had in mind is a flats calibration issue, since this kind of ring artifact happens when your light frames and flat frames do not mechanically match (because of, for example, focuser tilt/sag, unreliable collimation, or imaging-train stability issues between the lights and the flats). You did mention you have thought about calibration issues, but this is still my guess. I used to have an unreliable focuser and an unreliably thin aluminium tube in my Newtonian, and most of the time flats had this kind of ring artifact, which went away after making sure everything was mechanically sound. Any ideas that come to mind are just wild guesses without seeing the raw data though. Can you post a raw light, a matching dark frame, a dark-flat frame (or bias, whichever you use) and a matching flat frame in raw format straight off the camera? You can attach .fits (or any other) files here on SGL just by dragging and dropping them below where you write a comment, so no need to link to an external file-hosting site.
  13. Phenomenal first image, and not just as a first image: the image is good in general. Really good job. Learning all of PixInsight and the various xT suites for a first image is very impressive.
  14. Wow! That's a lot of dust. Top shelf processing too, really a winner image right here.
  15. I think you are being a bit unfair here. There is nothing wrong with SharpCap itself, and just advising to "take some images with it" is plain silly when the OP has a genuine worry. I would imagine manufacturers also measure camera performance by taking exposures and measuring the pixel values within (like SharpCap does). These technical specs can't be seen in a single sub frame, at least not easily, so I really don't think looking at a subframe has any meaningful value here. But the small technical differences can be measured, and they can have a real effect on integrations that you would not otherwise have been able to see. I'm sure you know all this... Definitely not a panic over nothing. However @zernikepolynomial, if you look at your graph and the published graph, they do look similar in many parts. SharpCap has skipped gain 125 and gone directly from 100 to 150, which is a problem if you wanted to know the stats for gain 125, but I'm not sure there is a way to force SharpCap to use specific gain values for the graph. Both 100 and 150 look like fairly accurate measurements compared to the published values, so I think it is possible the read-noise drop actually does happen at 125 but is just not shown. The actual amount of read noise is slightly higher, which I think is a bit concerning, but it doesn't look too bad. The "low read noise" modes are something I don't really trust myself. My Rising Cam branded IMX571 camera has a low read noise mode, but there is no documentation on what it actually means. I do notice that the framerate drops by exactly half when it is engaged, leading me to think it is some sort of double-exposure internal dark-calibration thing, which would not be useful at all for long-exposure imaging, where it probably gets disengaged. This would not be apparent with SharpCap measurements, since the exposures used are very short, hence my distrust.
If we assume your camera is not using the "LRN" mode (whatever that is), then at gain 150 you only have a discrepancy of 1.49 versus 1.417, which is not too much. Still a bummer that it's more than it should be.
  16. The histogram-1/3rd-of-the-way rule is just a guideline; it doesn't actually mean anything and can't be used to gauge whether an exposure is good enough, if you want to be pedantic about it. After all, you could double your ISO and get a much brighter histogram, which surely is much better? Well, not necessarily: DSLRs often have a point of diminishing returns at some ISO, after which read noise no longer decreases meaningfully (not sure about the 400D; some searching suggests ISO 400). If you really wanted to do the maths you would need to know the electrons-to-ADU conversion ratio at the chosen ISO, the amount of read noise in electrons, and some other technical details. These values are not given by DSLR manufacturers, so there won't be an easy way to do all this, hence the common tip of the histogram 1/3rd of the way through as an indication of a good-enough exposure (it's not, but it's better than nothing or taking 1 s frames). For what it's worth, the 60 s exposure you have linked here looks OK to my eyes. Maybe a little on the dark and noisy side; you could try exposing for 2 or even 3 minutes if your guiding can handle the extra length. If not, take a few hundred subs like this and there should be a nice M27 in the end. I'm not familiar with APT, but if it's just a visualization thing then it doesn't matter. If it actually applies some kind of stretch to the raw data and saves it modified, then leave it off.
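The maths in question, if you did have the numbers, boils down to exposing long enough that sky shot noise swamps read noise. A back-of-envelope sketch with entirely made-up values (real gain and read-noise figures for a DSLR are not published):

```python
# All three figures below are assumed placeholders, not real 400D specs.
read_noise_e = 3.0      # read noise in electrons (assumed)
gain_e_per_adu = 0.5    # electrons per ADU at the chosen ISO (assumed)
sky_adu_per_s = 2.0     # sky background flux above bias, in ADU/s (assumed)

# A common criterion: sky signal >= 10 * read_noise^2 electrons, so that
# read noise adds only a few percent to the total noise of each sub.
target_e = 10 * read_noise_e**2
t_min = target_e / (sky_adu_per_s * gain_e_per_adu)
print(round(t_min))  # 90 -> a ~90 s sub would be sky-limited with these numbers
```

The histogram position is just a crude proxy for this calculation, which is why it only roughly works.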
  17. It sure has been a weirdly clear and warm summer so far. Let's hope it continues well into September once the dark skies return. Probably not, but can't complain at the moment.
  18. I drive around 45-60 min to a Bortle 4 location when I go imaging, so often there is no chance to let my 8" scope cool down properly beforehand. I help the process along by driving there with the heater off, so that the scope cools down by the time I arrive and I lose as little time as possible to settling tube currents. It really makes me question the sanity of the whole hobby when it's -10 or colder for the drive there... And often the first 30-60 min is troubled by thermals anyway. Running a primary-mirror fan helps a lot though.
  19. Actually I mixed something up too: I thought you wrote refractor, but it was a reflector. Anyway, with a Newtonian your coma corrector will most likely be the weakest link in terms of what kind of resolution you get out of the thing (coupled with seeing of course, but that you can't really help).
  20. You can bin the 183MM images to get a more reasonable sampling, either x2 or x3. There is hardly a point in changing cameras, especially since the 174MM has a smaller sensor than your current one. And by the way, it's not really possible to just say that one is oversampled and one is undersampled without knowing the aperture of the scope, the quality of the flattener/corrector used, the typical seeing conditions and so on. But for almost all imagers out there a sampling of 0.75" per pixel is severely oversampled (you got over- and undersampling mixed up here; a small arcsec/pixel value is typically oversampled, not undersampled). Bin x2 with the 183MM might be ideal, or very close to it, depending on exactly which scope the 650 mm FL refractor is and what kind of seeing you typically get. If not, bin x3 to 2.25'' is still very good, and I would consider it "high resolution" if the data supports that resolution.
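These sampling figures come from the standard image-scale formula; a small sketch (the 2.4 um pixel size is my assumption for the 183MM):

```python
# Image scale in arcseconds per pixel:
#   206.265 * pixel_size_um * binning / focal_length_mm
def image_scale(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
    return 206.265 * pixel_um * binning / focal_mm

# 2.4 um pixels (assumed for the 183MM) on a 650 mm focal length scope:
print(round(image_scale(2.4, 650), 2))     # 0.76 "/px unbinned
print(round(image_scale(2.4, 650, 2), 2))  # 1.52 "/px at bin x2
print(round(image_scale(2.4, 650, 3), 2))  # 2.28 "/px at bin x3
```

Compare these against your typical seeing FWHM: sampling much finer than about a third of the seeing disc buys nothing.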
  21. Here is an astrophotography example of a severely undersampled image, and then a close-up at 500% zoom: the smallest stars, the ones that are actually only one pixel, look somewhat blocky at 500% zoom, but none of this was visible at 100% because they appear as point sources. None of the stars larger than a single pixel appear blocky. In short, don't worry about undersampling, which may not even occur.
  22. First, welcome to SGL! With my 550D and an f/4.4 Newtonian coma-corrected to f/4.2 I used to see the same thing: a noticeable dark band on one of the flat sides of the sensor. I used to think it was the mirror casting a shadow as it rises out of the way, or shutter shadow because of the mechanical shutter. But if you look closely at your camera you see that the sensor sits boxed in deep within its recess. The camera body/mirror mechanism at one side of the sensor is close enough that it could cast a shadow (like the part circled below). In that case there is nothing to be done about it, except of course keep taking flats and accept the lower SNR on one side. Usually you can anticipate this and compose your shot so that there is nothing of interest on that side. Other than that, you probably have a normal sky gradient in the image, which could be confused with something originating from the camera. It could be a coincidence that the sky gradient matched the sensor orientation in this case, leading you on a wild goose chase.
  23. I ran a very quick process (in Siril) on the stack: star colours look alive and well, so at least it's not a capture issue where stars were saturated to begin with, and it all seems fixable. Stars are bi-coloured because the filter passes only the blue-green O3 and the very deep red Ha. Personally, for this reason, I would work on the stars-only layer in Photoshop and manually fiddle with the levels until the star colours look the most natural to my eyes, but this one isn't too far off either, so that's not strictly necessary. What I did: bin x3 (average), slight crop, background extraction (crude; could have done a better job), photometric colour calibration followed by a manual white reference on that bright-ish white-looking star in the top-left corner, just because I thought it might take the edge off the very red stars. Stretched first with an asinh stretch at 300 power and then a simple histogram transformation. Where you probably took a wrong turn would be the hyperbolic stretches, especially if you ran them on the stars-only image. I recommend first processing the image to the point where the stars look good to your eyes and only then separating them with StarNet. Then keep working on the starless layer until you like what you see and re-introduce the stars. This way the star layer needs minimal work, because the stars were good before you separated them. I usually do all this in Photoshop because it's much easier to see what is going on with the layer system. Typically the stars-only layer sees only minimal work, maybe some saturation or levels adjustments, but definitely no hyperbolic stretching at that point.
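For reference, the asinh stretch itself is simple enough to sketch. A generic NumPy version (the "power" parameter here is a plain stretch factor and may not match Siril's exact parameterisation):

```python
import numpy as np

# Generic asinh stretch: lifts faint signal strongly while compressing
# highlights, which is why it preserves star colour better than a plain
# gamma or levels stretch.
def asinh_stretch(img: np.ndarray, power: float = 300.0) -> np.ndarray:
    img = np.clip(img, 0.0, 1.0)  # assumes data normalised to [0, 1]
    return np.arcsinh(power * img) / np.arcsinh(power)

x = np.array([0.0, 0.001, 0.01, 0.1, 1.0])
print(asinh_stretch(x))  # faint values jump up, 0 stays 0 and 1 stays 1
```

Because the curve flattens out near saturation, running it (or a hyperbolic stretch) again on already-stretched stars is what crushes their cores to white.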
  24. Would you be willing to share the raw stacked file, or just a single sub-exposure? It would be easier to give advice with the raw data at hand. But anyway, how are you stretching the data here? You shouldn't drop the white point below the point of star saturation, and it looks like that may have happened, considering that every star here is just white and colourless. It's also easy to destroy star cores if you use one of the more advanced hyperbolic stretch tools and place some of the sliders somewhere other than the best spot; personally I don't like using them too much. If you ran deconvolution too aggressively, that may also have destroyed the stars. The image looks fairly noisy, so I would recommend not running deconvolution in this case.