Everything posted by vlaiv

  1. I've posted a related question about INDIGO just a few seconds ago in the software section as well - this is a sort of follow-up, but a bit separate. INDIGO has an ALPACA agent - which means that in theory I could connect to it from a Windows machine via ASCOM and use all the hardware connected to my INDIGO Sky setup (RPi4 running the INDIGO server). I haven't looked at ASCOM 6.5 yet; I did not know it was even released. Anyone using it? Is it different from regular ASCOM 6.3 as far as local functioning goes? Will software made for 6.3 work on the new 6.5 version?
  2. I finally had time to do some housekeeping and preparation on my imaging gear, and I upgraded the INDIGO Sky instance that I have on the RPi4 along with the underlying Raspbian. Then I noticed that there is now an INDIGO Imager available, and it seems like a very nice remote solution for my use case. I'll leave the imaging rig outside in the yard (AzGTI + ASI178mmc + Samyang 85mm + guider setup + RPi4) and "operate" the rig using INDIGO Imager from the comfort of my study (wifi connection). I've already set up much of this, but since I still haven't used this combo, I wonder how well / glitch-free it will work. It seems to work OK in "in-house" testing conditions - meaning I can slew the mount and so on, but I haven't tried long exposures / plate solving and all the other bits - since, well, I don't have a night sky in my study (that would be nice). Anyone using this or a similar combo? All info is welcome.
  3. I don't think that will help much. You always have one more unknown than you have equations. With two images you'll have 3 unknowns - background signal, first LP gradient, second LP gradient. Whenever you add another image, you add another unknown - the LP gradient in that image. That won't help either, I'm afraid. LP strength and distribution will change - because in one instance you have the whole atmosphere scattering light and in the other you have a cloud at some distance and at some angle. You'll simply get a different gradient pattern of different intensity.

     However, your ideas have merit and I've identified one case where such reasoning can actually be exploited. Imagine that you are imaging an object of unknown extent - extended nebulosity where you have no idea if it will fit the FOV (or you know it won't - you are imaging an interesting part of a larger nebula). Whenever you have signal over the whole image, it is very hard to do background removal. It is much easier in cases where you have, say, a galaxy or cluster in the center of the FOV and the rest is just a sparse star field. Herein lies the trick to doing "accurate" gradient removal. Say you have this case:

     At some point in the session where you have the least LP, ideally around the meridian (around the time of the meridian flip), you take one frame that you mark as the "reference" for this purpose. At the beginning of that frame you record the Alt and Az of the object (not RA and Dec - but Alt/Az). After you've done that, you wait a bit - long enough for the target to drift out of view - while keeping the scope pointed at the same Alt/Az coordinates. Then you take another image - like this: Then you continue imaging as normal.

     The premise is that: a) in the same part of the atmosphere (not the celestial sphere) the gradient will be the same - hence the Alt/Az coordinates; b) LP intensity did not change significantly over the course of one exposure. LP changes over the course of the night - people turn lights on and off - but here we hope that it did not change significantly in the course of a few minutes, which is reasonable.

     You now have one frame on which you can do standard background extraction (note extraction rather than removal), which we can then subtract from the reference frame containing the target. Then we can normalize all other subs against that reference frame with the background subtracted. The method is of course only as accurate as your background extraction on a sparse star field, and it depends on whether there is a sparse star field near the target.
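     A minimal sketch of the subtraction step, assuming the reference frame and the background-only frame are already calibrated and saved as FITS files (the file names, the median filter size and the use of astropy/scipy are illustrative assumptions, not part of the method described above):

     ```python
     import numpy as np
     from scipy.ndimage import median_filter
     from astropy.io import fits

     # Calibrated frame with the target, taken at the recorded Alt/Az...
     reference = fits.getdata("reference_altaz.fits").astype(np.float64)
     # ...and the frame taken at the same Alt/Az after the target drifted out.
     background = fits.getdata("background_altaz.fits").astype(np.float64)

     # Stars left in the background-only frame would corrupt the gradient estimate,
     # so smooth them out with a broad median filter before subtracting.
     smooth_background = median_filter(background, size=64)

     # Background-extracted reference: target signal plus whatever LP changed
     # between the two exposures (assumed negligible over a few minutes).
     clean_reference = reference - smooth_background
     fits.writeto("reference_background_subtracted.fits", clean_reference, overwrite=True)
     ```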
  4. Venus does not have any appreciable detail in the visible part of the spectrum - it just looks like a "milky" marble, and the only thing you can see is the phase. I only ever did one image of Venus: That is about all you can get out of it unless you use UV filters. With UV filters you can distinguish cloud formations. For example, see this work: https://www.cloudynights.com/topic/626565-venus-in-uv-light-on-july-1-7-and-19/
  5. Oh, I see. The only thing that comes to mind is a "parfocalising ring" for 2". Something like this: https://www.firstlightoptics.com/adapters/astro-essentials-parfocal-rings.html
  6. How about using this: https://www.firstlightoptics.com/adapters/astro-essentials-2-35mm-focus-extension-tube.html
  7. About as large as an 8" F/6 Newtonian. Probably about twice as heavy and about 20-30 times more expensive?
  8. There is simply no way to tell what is background and what is legitimate signal. The best you can do is model the background somehow and remove it based on that model. The two simplest background models are:

     - constant background
     - linear gradient

     For the first, you don't need any special tool - setting the black level will remove it. This is the best kind of background, but it only happens under pristine skies with a small field of view.

     The second kind - a linear gradient - is best removed at the linear stage. You simply model the background as some sort of linear gradient. You can manually choose the slope and direction of the gradient, or do it with the help of some reference points. This kind of gradient removal works well in moderate to strong LP - if your FOV is small and the LP gradient is relatively linear. Not sure how the choice of stacking algorithm impacts linearity of the gradient in the final image though. I specifically developed a normalization method that equalizes the linear part of the gradient between subs - so the stack has a single linear gradient that can be removed easily. In any case, if you want to successfully remove background, it is best done at the linear stage. If you do a non-linear stretch, even a linear gradient becomes a non-linear one.

     As soon as you have non-linear gradients, things start to get trickier. Most features that are not background are non-linear, some form of arbitrary function, and every function can be approximated ever more precisely by a higher-order polynomial. If you start modeling the background as a higher and higher order polynomial, there is a bigger chance that you'll flatten some legitimate signal as well.

     Even the linear background approximation has specific cases where it does not work - or requires special handling. Say you have nebulosity in the left part of the frame but not in the right part. Using linear background removal with reference points over the whole image will "tilt" the image - making the nebulosity darker and the other side, which should be dark, lighter. For that reason, background removal needs help determining which parts of the image are in fact background. In the above case we would select only one side of the image and mark that as background. I've created an automatic process that does this - not sure how accurate it is, but it's giving me good results most of the time. If you want to see what it will produce from your image, just attach a linear FITS of that image and I'll run my background removal routine on it.
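     As an illustration of the linear-gradient model (not the automatic routine mentioned above), here is a rough sketch that fits a plane to manually chosen background sample points and subtracts it from a linear image - the sample coordinates are hypothetical:

     ```python
     import numpy as np

     def remove_linear_gradient(image, sample_points):
         """Fit background = a*x + b*y + c to (x, y) sample points and subtract it."""
         xs = np.array([x for x, y in sample_points], dtype=np.float64)
         ys = np.array([y for x, y in sample_points], dtype=np.float64)
         zs = np.array([image[y, x] for x, y in sample_points], dtype=np.float64)

         # Least-squares plane fit through the chosen background samples.
         design = np.column_stack([xs, ys, np.ones_like(xs)])
         (a, b, c), *_ = np.linalg.lstsq(design, zs, rcond=None)

         yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
         return image - (a * xx + b * yy + c)

     # Example: sample only the nebulosity-free side of the frame so real signal
     # does not "tilt" the fit (coordinates are made up for illustration).
     # flat = remove_linear_gradient(linear_image, [(1500, 100), (1550, 800), (1480, 1500)])
     ```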
  9. I agree with going without the focal reducer, but does that really mean more expensive eyepieces? Some of the cheapest 2" long focal length EPs actually work better at F/10 than at F/6.3. We often go for a wider-field, shorter-FL eyepiece to obtain a smaller exit pupil and darken the background - but that can be achieved with longer-FL EPs in an F/10 configuration. If I want to maximize FOV and get, say, a 3mm exit pupil on my F/6 dob, I need a very expensive EP - around 18mm FL, which would be say the ES92 17mm or perhaps 100° eyepieces in the 18-20mm range. These don't come cheap and need to have good correction. On the other hand, with an F/10 scope I can get the same for say 100 euro in the form of the APM 30mm 80°, or maybe the ~180 euro Omegon Oberon 32mm 82° EP.
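     The exit pupil arithmetic behind that, as a quick sketch (exit pupil = eyepiece focal length / focal ratio):

     ```python
     def exit_pupil_mm(eyepiece_fl_mm, focal_ratio):
         """Exit pupil is eyepiece focal length divided by the telescope focal ratio."""
         return eyepiece_fl_mm / focal_ratio

     print(exit_pupil_mm(18, 6))    # F/6 dob with an 18mm EP  -> 3.0mm
     print(exit_pupil_mm(30, 10))   # F/10 SCT with a 30mm EP  -> 3.0mm
     ```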
  10. Altair Astro has an FPL-51 version and by the looks of it, it is more affordable than the SVBONY model - at least for UK buyers (not sure how VAT is handled, but the AA version has no import duties and the ex-VAT price is better than SVBONY's): https://www.altairastro.com/starwave-ascent-102ed-f7-refractor-telescope-geared-focuser-468-p.asp The only unknown is availability.
  11. Yes, one of the important aspects of using a reducer that is also a corrector is getting the distance right. For example, the SCT x0.67 reducer has an 85mm working distance. Although it says +/-15%, if you don't get that right, the level of reduction changes and so does the correction. This means that pretty much all 2" diagonals are out of the picture, as most have a 100mm+ optical path. A longer working distance means stronger reduction. The tech sheet also says 41mm clear aperture - so even if the SCT is not vignetting, the reducer will - and only a 24mm corrected image field.
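     A rough sketch of why the working distance matters, using the thin-lens approximation (the reducer focal length below is inferred from the quoted 0.67x at 85mm figures, not taken from any spec sheet):

     ```python
     # Thin-lens approximation: reduction factor R = 1 - d / f_reducer,
     # where d is the reducer-to-focal-plane (working) distance.
     # From R = 0.67 at d = 85mm, the implied reducer focal length is:
     f_reducer = 85 / (1 - 0.67)            # ~258mm (inferred value, for illustration only)

     for d in (72, 85, 98):                 # roughly -15%, nominal, +15% working distance
         print(d, round(1 - d / f_reducer, 2))   # ~0.72, ~0.67, ~0.62 - more distance, more reduction
     ```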
  12. I guess it is down to illumination / the diameter of the illuminated circle. The C9.25 is in some sense comparable with a stock 8" Newtonian. I often wondered how I would feel owning a long focal length telescope and whether I would get that "boxed in" feeling.

     With a stock 8" Newtonian you get a 25mm 50°-ish eyepiece - let's say it is similar to a 25mm Plossl and hence has about a 22mm field stop. An 8" Newtonian in the F/6 variety has 1200mm of focal length. The C9.25 has almost double that at 2350mm. If it can illuminate a 44mm circle, then all you need is a 50mm Plossl eyepiece and you'll get almost the same field of view (the exit pupils will be a bit different, but that can be sorted if one wishes by using a different AFOV EP). According to this: https://groups.google.com/g/sci.astro.amateur/c/QfLsV_roCQY the C9.25 has a baffle wide enough to let 47mm of light through - so it will fully illuminate the complete field stop of 2" eyepieces.

     I think it would be better to use it without the reducer unless the reducer is also a corrector. SCTs suffer from coma. According to: https://www.telescope-optics.net/SCT_off_axis_aberrations.htm that is not that bad. I'm never bothered by coma in my F/6 8" Newtonian, but I don't have EPs with a very large field stop so I can't be sure. In the end, the reducer will "compress" things down into a smaller circle. If the C9.25 can illuminate 47mm, adding a x0.67 reducer will turn that into a ~32mm illuminated circle, and using EPs with a larger field stop will lead to vignetting. We need about a 10% drop-off in illumination before we start noticing vignetting.
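     A quick numeric check of that field of view comparison (true field ≈ field stop / focal length, taken in radians):

     ```python
     import math

     def true_fov_deg(field_stop_mm, focal_length_mm):
         """True field of view given the eyepiece field stop and telescope focal length."""
         return math.degrees(field_stop_mm / focal_length_mm)

     print(true_fov_deg(22, 1200))   # 25mm Plossl in the 8" F/6 Newtonian -> ~1.05 deg
     print(true_fov_deg(44, 2350))   # ~50mm Plossl in the C9.25           -> ~1.07 deg
     print(47 * 0.67)                # x0.67 reducer squeezes a 47mm circle to ~31.5mm
     ```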
  13. That depends on your Bayer matrix order. It is most likely RGGB, but it is easy enough to test - just shoot something red. The green channels will have equal values and red will be higher than blue.
  14. I have no idea. I don't use PI, nor did I help make it, so I'm not really in a position to answer that. I do however have a firm understanding of numeric formats and why they should and should not be used.
  15. There is absolutely no advantage in using 64-bit floating point format over 32-bit floating point format for astro images - only drawbacks: slower computation, more memory usage and so on.
  16. Small update again:

     - it will not work out of the box; the SynScan app is simply not configured to accept keyboard input in that way
     - a Bluetooth device can and will interfere with wifi

     The second issue caused me to change from hotspot to station mode (connecting the mount to my local wifi). When it was running as a hotspot, the BT device caused frequent disconnects of my mobile phone. Not sure why that is, but as far as I can see it can happen due to interference between BT and wifi. I don't know if re-pairing the device will fix the issue (as suggested by net wisdom). For me it is not a deal breaker, but some might find it difficult to use the device in the field (although reversing roles is always a possibility - make your phone the AP and have the mount connect to it).

     In the end, it will need to be an external app for the time being to create a bridge between the two. I'll look into writing one (it's been a while since I did any Android coding).
  17. See these threads: I would personally avoid binning in drivers, as binning in software offers more flexibility. If you lack a simple way to bin, download ImageJ - you can bin an image easily with that tool.
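     If you prefer to stay in Python rather than ImageJ, a 2x2 software bin of a linear frame is just a sum of adjacent pixels - a rough sketch:

     ```python
     import numpy as np

     def bin2x2(img):
         """Sum each 2x2 block of adjacent pixels (image dimensions assumed even)."""
         h, w = img.shape
         return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

     demo = np.arange(16, dtype=np.float64).reshape(4, 4)
     print(bin2x2(demo))   # 4x4 input -> 2x2 output, each pixel the sum of a 2x2 block
     ```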
  18. Just to keep you up to date - I purchased a mini Bluetooth game controller thingy for an Android phone and paired it with my phone. It is a very small controller - the size of a thumb drive, maybe a bit bigger. It fits nicely in the hand, but the feel is rather plasticky (sure enough, it is a plastic device). The thumb joystick bit is omnidirectional but rather imprecise - precise enough, though, to be used as an up/down/left/right control. I tried it with a gamepad testing app, and there it reports direction as fractions of the 0-1 range. When moving left, right, up and down, that axis is always +/-1 but the other has various values like 0.1-0.2, or sometimes very close to zero - like 0.005. When aiming for a diagonal, both axis values get larger. It pairs easily with the phone (it came without instructions) and I believe nothing special will be needed to command the mount - at least left/right/up/down - as it acts as keyboard arrow keys in that mode (it has two modes - in one the joystick emulates a mouse; I think the "game" mode is the one with arrow keys). This model has an internal battery and charges via a USB port (not sure which USB connector that is, but I had a cable that matched). Will test it with the mount and SynScan app and let you know if it's working.
  19. That depends on what your target resolution is. I personally view OSC sensors as having half the sampling rate their pixel size would suggest. I don't debayer in the classical sense - I extract red, blue and two green subs out of each raw sub after calibration. Then I stack each color as if the data were coming from a mono + filter setup. Green will have twice as many subs, which is good, as green carries the most luminance data and will have the best SNR. Once you separate the colors you can additionally bin your data if there is a need - but that really only happens with long focal length scopes where you need to bin more, like x4 relative to the native pixel size.
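     A rough sketch of that channel extraction for an RGGB sensor, assuming the calibrated raw sub is available as a FITS file (the filename is a placeholder):

     ```python
     import numpy as np
     from astropy.io import fits

     raw = fits.getdata("calibrated_sub.fits").astype(np.float64)

     # RGGB layout of each 2x2 tile:  R G
     #                                G B
     # Each extracted channel is a half-resolution image, later stacked
     # as if it came from a mono camera with the corresponding filter.
     red    = raw[0::2, 0::2]
     green1 = raw[0::2, 1::2]
     green2 = raw[1::2, 0::2]
     blue   = raw[1::2, 1::2]

     # green1 and green2 both go into the green stack, doubling its sub count.
     ```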
  20. That might not be true. Binning is a form of resampling, but resampling is not necessarily binning. There are many different methods of resampling (integer resampling - which can be equal to binning - then bilinear, bicubic, spline, Lanczos, to name a few). Binning is done exclusively by adding adjacent pixel values, and is done prior to post-processing - while the data is still linear and not stretched. Out of all the mentioned methods of reducing sampling rate, only binning has certain properties: it does not introduce pixel-to-pixel correlation in the final image and it has a known SNR improvement factor, equal to the bin factor. Other methods introduce correlation between pixels - which blurs the image, reducing detail while providing a larger SNR improvement, or keeps sharpness but does not improve SNR as much as binning. Binning a stacked color image after debayering and stacking the subs will not give the same SNR improvement as binning the individual subs in a way adequate for OSC data, then debayering them and finally stacking them.
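     A small numeric illustration of the "SNR improvement equal to the bin factor" claim, using a toy frame with constant signal and read noise only (all numbers are made up for the demonstration):

     ```python
     import numpy as np

     rng = np.random.default_rng(0)
     signal, noise = 100.0, 10.0                      # per-pixel signal and noise (toy values)

     sub = signal + rng.normal(0.0, noise, size=(1000, 1000))
     binned = sub.reshape(500, 2, 500, 2).sum(axis=(1, 3))   # 2x2 bin by summing

     print(signal / noise)                            # per-pixel SNR -> 10
     print(binned.mean() / binned.std())              # binned SNR    -> ~20, i.e. x2 for a 2x2 bin
     ```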
  21. I have one more estimate - I'm sure the object was closer than Saturn at the time the recording was made. The maximum distance is estimated at about 1 billion km (speed of light x duration of the sub / angular length of the trail).
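     Roughly how that bound works out, taking the trail length (~3°) and the exposure (~3 minutes) discussed elsewhere in the thread:

     ```python
     import math

     trail_rad  = math.radians(3.0)       # estimated angular length of the trail
     exposure_s = 180.0                   # sub duration, taken as ~3 minutes
     c          = 3.0e8                   # speed of light, m/s

     # Nothing can cross the trail faster than light, so:
     max_distance_km = c * exposure_s / trail_rad / 1000.0
     print(max_distance_km)               # ~1.03e9 km - about 1 billion km, i.e. closer than Saturn
     ```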
  22. Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth
  23. That "in focus" thing got me thinking - how best to go about it? We have an 85mm scope operating at 455mm, which is about F/5.35. In order for the defocus blur to be at most 1px wide, the focus position for this object needs to be no more than 5.35 pixel-widths away from infinity focus. The pixel size is 3.76µm, so we have a maximum defocus of ~20µm, or 0.02mm. We have the well-known formula 1/f = 1/f1 + 1/f2, where f1 and f2 are the distances of the object and the image from the lens. 1/455mm = 1/455.02mm + 1/X gives X ≈ 10,350,000mm ≈ 10,350m ≈ 10.35km. This is even more interesting - the minimum distance, based on the trail being in focus, is about 10km.
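     A quick numeric check of that thin-lens calculation, using the values above:

     ```python
     focal_length = 455.0                 # mm
     f_ratio      = 455.0 / 85.0          # ~5.35
     pixel_size   = 3.76e-3               # mm

     # Longitudinal focus shift that produces a one-pixel-wide defocus blur:
     defocus = pixel_size * f_ratio       # ~0.02 mm

     # Thin lens: 1/f = 1/object_distance + 1/image_distance,
     # with the image plane sitting `defocus` beyond the infinity focus.
     object_distance_mm = 1.0 / (1.0 / focal_length - 1.0 / (focal_length + defocus))
     print(object_distance_mm / 1e6)      # ~10 (km) - anything much closer would look defocused
     ```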
  24. We have no idea how large a trajectory we are talking about here, so we must set some bounds. I highly doubt that it is a balloon, based on the "unresolved object" constraint. A balloon will be at least 10-15cm in diameter in order to be able to fly, and we have already shown that an object of that size needs to be at least 10 kilometers away in order to be unresolved in the image (and that is assuming 2"/px resolution when in fact it is closer to 1.6"/px - so it must be even further away). I also don't think a balloon would reflect enough light to produce such a trail.

     Eyeballing the length of the trail, it is about 3° long. The path the object traveled, if it is larger than 10cm, is at least 500 meters (constant distance to the telescope, path in the plane perpendicular to the optical axis - the shortest possible path). It did so in 3 minutes or less, so the lower bound for its speed is ~2.8m/s.

     Another interesting point that we did not discuss is it being in focus. How far away does it need to be in order to be in focus like it is in the image?

     By the way, I just checked - model rockets don't get very high, up to 2,000 feet or thereabouts. Sure, there have been records of rockets being put into space - like 116km altitude - but that was done with special rockets, teams and whatnot.
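     The lower bound works out roughly like this (taking the 10km minimum distance and the 3° trail from above):

     ```python
     import math

     min_distance_m = 10_000.0                          # from the "unresolved object" argument
     trail_rad      = math.radians(3.0)                 # eyeballed trail length
     exposure_s     = 180.0                             # 3 minutes or less

     min_path_m = min_distance_m * trail_rad            # ~520 m, i.e. "at least 500 meters"
     min_speed  = min_path_m / exposure_s               # ~2.9 m/s, matching the rough ~2.8 m/s figure
     print(min_path_m, min_speed)
     ```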