Everything posted by vlaiv

  1. I think there is real OIII glow in that region. If you look at NB images of that region, you will see that it is mostly dominated by OIII (but this is largely due to processing - OIII is usually fainter than Ha in "normal" light, as there is much less OIII emission than Ha).
  2. Mine is version 2.0.3 (I think that was the last released version)
  3. You need to help it a bit. Select the structured panorama option and approximately align the images yourself.
  4. By the way, Microsoft ICE does wonders:
  5. Definitely. Distortion grows as a function of distance from the center. This is actually a normal thing - you can't map a sphere onto a plane without distortion. For small angles there is hardly any distortion, but as the angle increases the distortion becomes apparent. If you keep only the central region of your subs, it will be minimal (this strictly depends on the size of the FOV - with a large FOV you simply can't avoid distortion). A rough sketch of how fast it grows is below.
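     Just to put numbers on it (this is a generic rectilinear-projection calculation, not tied to any particular lens): the radial stretch relative to the field center goes roughly as tan(angle)/angle, so it is a fraction of a percent a few degrees off-axis but tens of percent at very wide angles.

        import math

        for deg in (1, 5, 10, 20, 30, 45):
            theta = math.radians(deg)
            stretch = math.tan(theta) / theta   # radial stretch relative to the field center
            print(f"{deg:2d} deg off-axis -> stars pushed out by ~{(stretch - 1) * 100:.2f}%")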
  6. Ok, so here is how it is done (but I'm not sure you'll like the result):

     Select the Descriptor-based registration plugin and select your images. Now you want to adjust the parameters. Here is the problem with already stretched images this large: you get many stars - or rather many features (and noise) identified as maxima. You want to bring those down to a reasonable number. Select "maxima only", then adjust sigma 1, sigma 2 and the threshold until you get a reasonable number of stars. More stars means better precision but much longer computation time. Usually 100 or so stars is the best compromise.

     These plugins don't work well with color data and transform color data into an RGB stack (where the first slice is R, the second is G and the third is B). After registration is done you get 6 slices - the first three are the channels of the first image and the second three are the channels of the second image. You now need to split those and manually combine each channel using Image / Calculator, for example with the Max method (see the sketch below). An alternative is to do this with each channel separately (process first red, then green, then blue, and compose at the end). The image above is the result.

     Now there are a few things wrong with the image. First, the subs are already stretched and are not normalized. This shows up as a difference in intensity - which is why you should do it on linear data instead. Second is lens distortion. These images are very wide field and ImageJ stitching / registration can't handle lens distortion. If you look at the overlapped region at 100% zoom, you will see that some stars are aligned properly, but as you approach the edge of the field the lens distortion kicks in and the stars can't be aligned properly any more.

     If you want to handle the above case, you should really use Microsoft ICE for that. It handles already processed images and lens distortion.
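     If it is easier to follow in code terms, here is a minimal numpy sketch of just that manual combine step (the registration itself is still done by the plugin; the array name and shape are assumptions for illustration):

        import numpy as np

        def max_combine(fused):
            # fused: registered output as a (6, height, width) float array -
            # slices 0-2 are R, G, B of the first image, slices 3-5 are R, G, B of the second
            first = fused[0:3]
            second = fused[3:6]
            # per-channel "Max" combine - same idea as Image / Calculator with the Max method
            return np.maximum(first, second)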
  7. @Astro Noodles I just used descriptor based registration to register two crops of the same image (one was rotated) - with success. You can use this method to stitch your mosaic.
  8. Can you post those two images and I'll stitch them for you and post how I did it? It is easier for me to work with actual data. By the way, it looks like the above stitching plugins don't handle rotation, so that is not very good. Maybe the registration plugins should be used instead.
  9. Can you be more specific about what you mean by overlaying? Does it create a stack with two slices - one holding the first image and the second slice holding the second image? If so, what is the value of the "missing" pixels in each image? Is it 0? If so, just do Image / Stacks / Z Project and select Sum. This will add all the images in the stack to form a single image (tiny example below).
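     Just to show what that sum does (toy numbers, assuming the missing pixels really are 0):

        import numpy as np

        slice1 = np.array([[1., 2., 0., 0.]])   # left tile, "missing" right half is 0
        slice2 = np.array([[0., 0., 3., 4.]])   # right tile, "missing" left half is 0
        mosaic = np.stack([slice1, slice2]).sum(axis=0)
        print(mosaic)                           # [[1. 2. 3. 4.]] - zeros contribute nothing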
  10. @Astro Noodles Look at this page: https://imagej.net/plugins/image-stitching I know I've done mosaics before and it was rather easy to do, but I don't remember which exact plugin I used for that. Try out any of those listed on the page, or search for further plugins - like http://bigwww.epfl.ch/thevenaz/mosaicj/

      @Elp Extract_Background.class

      The above is a plugin that I wrote for background extraction. Apparently I don't have the source code for it any more, so the above is a compiled version - you should put it in the plugins folder of your ImageJ/Fiji installation. It expects a 32-bit floating point image (single channel - do each channel separately). In principle you should not use compiled code from untrusted sources, so it is up to you to decide whether you want to use it.

      Here is how it works: it opens a dialog with some basic parameters (which you can leave at their default values - 99% of the time these are good). You can uncheck "produce background map" if you don't need it - it is more of a debug feature to show what the plugin determined to be the background of the image. After you run it, it creates two additional images - a gradient map and a background map (or not, depending on what you checked). Here is an example performed on data from SGL (someone posted data for processing, can't remember which thread that was in).

      Now you have to manually subtract the gradient map from the original image using Process / Image Calculator like this (don't check "new window" so it is done in place, and do check "32-bit result"). After this you can close the gradient and background maps and run the procedure a few more times on the image. In each iteration the background will be better defined and the gradient residual will be smaller (and may even change direction). Hover the cursor over the bright and dark parts of the gradient map to see the actual value of the gradient - it should be fairly small compared to the pixel values in the image. I often stop when I reach ~1.0E-7 or E-8 (for data in the 0-1 range; it usually takes only 3-4 rounds of this). A sketch of this loop is below.

      Do be careful when doing color images like this - this will make the background have a value of 0, and due to noise some pixel values will then actually be less than zero. When processing RGB images I measure all three channels and make the values positive by adding a small offset (you can measure each channel and determine the smallest value to add to all three to make all pixel values positive).
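      In rough numpy terms this is what the repeated subtraction and the final offset amount to. Note that estimate_gradient() below just stands in for whatever the plugin does internally - this is not its actual algorithm, only the bookkeeping around it:

        import numpy as np

        def remove_gradient(channel, estimate_gradient, rounds=4):
            # channel: single-channel 32-bit float data, roughly in the 0-1 range
            for _ in range(rounds):
                gradient = estimate_gradient(channel)   # stand-in for the plugin's gradient map
                channel = channel - gradient            # Process / Image Calculator, Subtract, 32-bit
                if np.abs(gradient).max() < 1e-7:       # residual gradient is small enough - stop
                    break
            return channel

        def make_positive(r, g, b):
            # after subtraction the background sits at 0, so noise pushes some pixels below zero;
            # add one common small offset so all three channels end up positive
            offset = -min(r.min(), g.min(), b.min(), 0.0)
            return r + offset, g + offset, b + offset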
  11. Yep - stitch while you are still linear but with any gradients removed. @Astro Noodles If you have any issues with gradient removal in ImageJ (or stitching for that matter) - let me know and I'll help if I can.
  12. How do you feel about a quadruplet then? https://www.teleskop-express.de/shop/product_info.php/info/p14747_TS-Optics-62-mm-f-8-4-4-Element-Flatfield-Refractor-for-Observation-and-Photography.html
  13. That is one brilliant argument. Requesting permission to reuse it for those "one too many beers" social events?
  14. @AstroMuni Although the telescope has a spherical mirror, it does not mean you can't take images with it. You just need to work within its limitations. This means no high resolution work (or you can, but the image will look a bit out of focus). What you can do is make mosaics with it. If you reduce the resolution to 1/3 of the present value, you will get a rather sharp and good looking image - like this: Now for a moment imagine that this small image is just the central part of a larger image, with the larger FOV resembling something like this: I'd say that no one could claim that such an image is lacking focus or anything like that. The trick is just to make a 3x3 mosaic with your camera (and bin this data x3 - see the small sketch below). It is a bit more involved but the result will be much better.
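      For reference, "bin x3" in software can be as simple as averaging each 3x3 block of pixels (a minimal sketch; some tools sum instead of average, which only changes the overall scale):

        import numpy as np

        def bin3x3(img):
            # average each 3x3 block of pixels; crop so dimensions are a multiple of 3
            h, w = img.shape
            h, w = h - h % 3, w - w % 3
            return img[:h, :w].reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))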
  15. That is probably not going to happen as AstroMaster 130 has spherical mirror at F/5. What you are seeing is not focusing error but rather spherical aberration: https://www.celestron.com/blogs/knowledgebase/does-my-astromaster-130-have-a-spherical-or-parabolic-mirror-what-is-the-difference
  16. In that case, take a look at this scope: https://www.svbony.com/sv503-80ed-f7-doublet-telescope/#-F9359B It is a decent F/7 ED doublet and very affordable. Not sure how much more you'll have to pay for shipping and import duty.
  17. https://www.firstlightoptics.com/evostar/sky-watcher-evostar-90-660-az-pronto.html

      Do note that Skywatcher models will be more expensive due to the announced 10-15% price increase in new batches; current prices will last only while current stock does. The above scope is a nice compromise in everything - price, aperture, F/ratio and hence color aberration, weight... My only personal objection is that it has a 1.25" focuser instead of a 2" one (which the 100mm models have).

      I have owned the ST102 and it is not an all-around scope. The CA is just too large for any serious higher power viewing. The CA itself might not bother you, but it simply blurs high power detail. A 100mm scope can show quite a bit of detail on planets / the Moon, but not so the ST102. The Bresser AR102XS has a similar amount of CA to the ST102. Although it has ED glass, it performs similarly to the ST102 because of its faster F/4.6 ratio (which is also hard on eyepieces). Here is a nice comparison between the two scopes (and a third one for good measure): http://interferometrie.blogspot.com/2017/06/3-short-achromats-bresser-ar102xs.html

      The 102/660 is the best of those due to it being the slowest. The 90/660 is slower still (at the expense of some aperture). In any case, you can control the amount of CA for high power viewing by using an aperture mask, and here the scope with the largest focal length has an edge. The 90/660 can easily be turned into an F/10 instrument with a 66mm aperture mask (660mm / 66mm = F/10), and it will then have similar performance on planets to, for example, a 70mm ED from SW.
  18. These are truly incredible images! How do you image with only a manually tracked Dob? Do you attempt to track, or just wait for the Earth's rotation to move the object through the FOV and take short videos?
  19. Yes, here is documentation for property: https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.processstartinfo.redirectstandardoutput?view=net-6.0
  20. Code for that is here: https://bitbucket.org/Isbeorn/nina/src/v1.10.1/NINA/Utility/ExternalCommand/ExternalCommandExecutor.cs

        Process process = new Process();
        process.StartInfo.FileName = executableLocation;
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        process.EnableRaisingEvents = true;

        process.OutputDataReceived += (object sender, DataReceivedEventArgs e) => {
            if (!string.IsNullOrWhiteSpace(e.Data)) {
                StatusUpdate(src, e.Data);
                Logger.Info($"STDOUT: {e.Data}");
            }
        };

        process.ErrorDataReceived += (object sender, DataReceivedEventArgs e) => {
            if (!string.IsNullOrWhiteSpace(e.Data)) {
                StatusUpdate(src, e.Data);
                Logger.Error($"STDERR: {e.Data}");
            }
        };

      It just captures STDOUT and STDERR and logs them with its internal logger. It's worth checking whether there is a log output window or a log file somewhere in NINA.
  21. The simplest way seems to be to alter the .bat file in the following way: whenever you want to write or "echo" something to standard output, redirect it to a file instead and just watch that file in another window.

      In the batch file, instead of:

        echo "I'm doing something interesting here"

      change it to (>> appends, so each new message is kept instead of overwriting the file):

        echo "I'm doing something interesting here" >> c:\output.txt

      Then in Windows PowerShell use this to follow the file:

        Get-Content c:\output.txt -Wait
  22. When calling another process, the calling process has control of the I/O handles. It can decide what to do with the standard output of the sub-process. You can't force it to be displayed in the cmd prompt that is opened. There are two options off the top of my head:

      1. Look in the NINA documentation for a way to examine the standard output of the script in real time - like opening a log window or similar. This is the standard way to do things - the calling process either discards the output or stores / shows it somehow (even cmd.exe is really just printing the output of other processes that simply write to the standard output handle).

      2. Instead of printing to the screen, consider some sort of independent log server where the script will "print", or rather send, its log.
  23. Bias as such is not needed in the calibration workflow because it cancels out. Let me show you (and explain why that is).

      The workflow involving bias goes like this:

      - You create a master bias by stacking bias subs - let's call that MB
      - You create a master dark by stacking (D - MB) subs - or in words, you take each dark, remove the bias by subtracting the master bias from it, and then stack those - let's call that MD
      - You calibrate each light sub by 1) removing bias, 2) removing dark, 3) doing flat calibration (that step is not important here so I won't bother with it)

      calibrated light = light - MB - MD

      Right? But let's just expand that expression a bit:

      calibrated light = light - MB - average(D - MB)

      If you have a constant inside an average, you can pull that constant out, since average(D - MB) is just

      ((D1 - MB) + (D2 - MB) + (D3 - MB) + ... + (DN - MB)) / N

      where N is the number of subs. If we rearrange the brackets we end up with

      (D1 + D2 + D3 + ... + DN - N * MB) / N = average(D) - MB

      (MB appears N times and N/N is 1, so MB can be pulled out of the average). So the expression

      calibrated light = light - MB - average(D - MB)

      transforms into

      calibrated light = light - MB - (average(D) - MB) = light - MB + MB - average(D) = light - average(D)

      There we go:

      calibrated light = light - average(D)

      No need to use bias to get a calibrated light. (A quick numeric check of this is below.)

      Why is bias then used in some cases? Two reasons, really.

      Sometimes bias is used instead of flat darks. This is strictly speaking wrong, but in many cases, since the flat exposure is short and the dark current is low, it can work - or rather the error will be too small to notice.

      The second reason is if you plan to scale darks or use dark optimization (which is really the same thing, except the computer tries to figure out the proper scale factor). Sometimes you build a dark library of, say, 10 minute exposures, but for some reason use 5 minute exposures when shooting the target - you can still calibrate your light subs with mismatched darks if you scale them. Dark current signal builds up linearly with time, which means that in 10 minutes it will be twice as strong as in a 5 minute exposure, or 5 times as strong as in a 2 minute dark exposure. We can exploit this fact if our camera behaves properly (CCDs tend to behave well for this and CMOS cameras behave poorly - so I would say this is mostly available for CCDs).

      Dark subs contain dark current signal and bias signal. If we remove the bias signal we can then scale the dark current signal (bias does not depend on time, so we must not scale it - we must subtract it first). This is the reason there is bias subtraction in the above expression - but it is only needed if you plan to scale your dark current. If you plan to scale the dark current, you can no longer take the bias out in front of the brackets and it won't cancel out - it is needed for the whole thing to work.

      Makes sense?
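      If you prefer to see it with numbers, here is a tiny made-up example (it only illustrates the algebra, not any particular camera):

        import numpy as np

        rng = np.random.default_rng(0)
        bias = 300.0
        darks = bias + rng.normal(100.0, 5.0, size=10)   # 10 dark subs: bias + dark current
        light = 1500.0                                    # a light sub (contains the same bias + dark)

        MB = bias                                         # master bias (idealised)
        MD = np.mean(darks - MB)                          # master dark built from (D - MB)

        via_bias = light - MB - MD                        # light - MB - average(D - MB)
        via_raw_darks = light - np.mean(darks)            # light - average(D)
        print(via_bias, via_raw_darks)                    # the two are identical (up to rounding)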
  24. I know, I'm just messing around. Or rather - it is very interesting to imagine data that would really be impossible
  25. When someone says they received "impossible data", most people will interpret that as "it is impossible for such data to be received" rather than the data itself being impossible - but there is such a thing as impossible data. For example, data that you have to look at twice in order to see it once. Or a data recording consisting of values so large that they can't be recorded. The possibilities are endless for "impossible data".