Everything posted by vlaiv

  1. I just wonder what your experience is with the following: light leak and position repeatability. I guess the manual filter wheel one starts with won't be top notch in that regard, and I wonder how you handled these two?
  2. I guess it has to do with machining precision. Nowadays these are manufactured on CNC machines with very good precision, so there is no problem with backlash, but on the other hand, Crayfords tend to slip under weight, and I guess that is the reason R&P are more popular as far as imaging goes. Dual speed is achieved the same way in both - it is a planetary-type mechanism with little balls (friction-type bearings) on the shaft itself.
  3. Absolutely none. Well, actually that is not 100% true. It provides two small benefits: - it virtually removes the need for precise tracking. Even basic mounts should be able to perform well over a few seconds of exposure (one of the reasons seeing is defined as FWHM in a 2 second exposure - that takes mount performance out of the equation) - much better granularity in selection of smaller FWHM values. Spells of better seeing can last only a few seconds, and over the course of a regular DSO exposure they are averaged out with regular and worse seeing as it fluctuates about the mean FWHM value. If you take really short exposures of 1-2 seconds, you can select those spells of slightly better seeing to get better overall FWHM after stacking - but that will of course result in many rejected subs because of worse FWHM (see the sketch below). In any case, I'm not sure that lucky DSO imaging is a valid approach after all. I think that better resolution images can be achieved with better use of algorithms and careful processing of data. I believe that better SNR will beat slightly smaller FWHM in regular stacking - especially if one utilizes "non regular" stacking to get the best of both worlds: better SNR and smaller resulting FWHM.
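A minimal sketch of the selection step, assuming per-sub FWHM measurements are already available from star fitting elsewhere (the array contents here are purely illustrative):

```python
# Sketch: "lucky DSO" sub selection by measured FWHM (values illustrative).
import numpy as np

fwhms = np.array([2.1, 1.7, 2.4, 1.6, 1.9, 3.0, 1.8])  # per-sub FWHM in arcsec

# Keep only subs in the best 30% of seeing - the rest are rejected,
# which is the SNR price paid for a smaller stacked FWHM.
threshold = np.percentile(fwhms, 30)
selected = np.where(fwhms <= threshold)[0]
print(f'keeping subs {selected} with FWHM <= {threshold:.2f}')
```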
  4. You only really need one sub from any previous session to determine sub exposure length, but it is subject to similar constraints as a total exposure calculator. LP levels fluctuate from night to night and even during the course of a single session. Also, to be able to calculate sub length, one needs a properly calibrated sub and the correct e/ADU value in order to measure background signal levels in electrons rather than ADUs (see the sketch below). In that sense, such a tool still depends on the user having additional knowledge. SharpCap, as @david_taurus83 mentions, has an option to determine optimum exposure length, and it works well as it does not require additional input from the user. The software does all needed measurements and calculations directly from the camera and sky conditions.
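For illustration, a minimal sketch of that measurement - assuming a calibrated sub loaded as a numpy array and a known e/ADU value (both are placeholders here):

```python
# Sketch: background level in electrons from a single calibrated sub.
import numpy as np

e_per_adu = 0.25  # placeholder gain value - use your camera's figure for this setting
sub = np.random.normal(80.0, 5.0, (100, 100))  # stand-in for a real calibrated sub

background_adu = np.median(sub)            # median is robust against stars
background_e = background_adu * e_per_adu  # convert ADU to electrons
lp_noise_e = background_e ** 0.5           # shot noise of the background signal
print(f'background: {background_e:.1f} e-, LP noise: {lp_noise_e:.2f} e-')
```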
  5. What does this really mean? The best explanation that I managed to conjure in my mind is something along these lines: that space is bounded in those directions with a very low bound on distance, and that any physical object is actually like wound wire in the direction of these curled dimensions - which poses a question. I'll try to explain it with an infinite room idea: let's imagine a room that has 6 sides, like the interior of a cube. Each side of this room contains a door, and when we exit through any of those doors, we reappear through the door on the opposite wall. This is a sort of 3d model of a closed / bounded hypersphere kind of 3d universe. Now, if we just take one direction in the above room and put in an object that is larger than the room itself - what will happen? The object will overlap with itself. Does this happen with those curled up dimensions? Because as far as I can tell, we can't perceive them because they are tiny and curled up - and tiny does not mean that they are limited in extent - it just means they are bounded, but one can move infinitely far in that direction.
  6. I remembered one tool that is desperately needed. Make a tool that will do proper color calibration on LRGB/RGB data.
  7. Well, no. If you want to do one as a way of learning all the bits, then by all means. If you want to build one to attract traffic to your website - again, certainly. In that case it is even easier, as you can copy the functionality of existing ones and improve on the user experience so it is easier to use. If you want to make such a tool so it will be useful to many instead of few, then I would say that maybe you should not bother, as there is no simple way to do it. In fact, there is no way to do it - and not because of you or people, but because there are too many unknowns for the thing to be useful in a broad sense. Here is an example so you can judge for yourself whether the tool is really useful or not and under which circumstances. What would we set as a threshold? Maybe 20-30% accuracy? You want to know if one night's imaging will be enough for a certain target - say 6h. If you need more than that - say 8-9h - then you'll need to split it over two nights. Here comes the question: what will your AOD be on the night of imaging? Aerosol optical depth can vary between 0.1 and 0.5 on average, 0.1 being very transparent and 0.5 being very opaque. Look at an AOD forecast chart: over Europe, current conditions span everything from below 0.1 to 0.5 and even higher. Now, if on the night of imaging you have say 0.5 AOD, that is 0.5 * 1.086 = 0.543 magnitudes fainter target. We know how to calculate magnitudes, so 0.543 = 2.5*log(ratio) => ratio = 10^(0.543/2.5) = ~1.65. So you need 65% more imaging time to gather the same amount of photons (see the sketch below). Your tool's accuracy depends on a condition that you don't know prior to imaging, and it can lead to a difference of 50% or more under regular conditions - and how many people do you know that check the AOD forecast before their imaging session? I haven't even touched upon light pollution and how it changes over the course of the evening, how it is also affected by local transparency conditions, and how hard it is to get brightness data for your targets to be able to do SNR calculations. Sure, you can make a useful tool and use it in your project - like an order-of-magnitude estimation tool for imaging time needed (or as a comparison between different setups); @dan_adi has done so for an exceptional amateur image of the gravitational lensing effect - but there are so many variables that you must understand and know to be able to do such calculations with any sort of precision.
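The arithmetic above as a tiny script, for anyone who wants to plug in their own AOD value:

```python
# Worked version of the AOD example: extinction in magnitudes is roughly
# AOD * 1.086, and extra imaging time follows from m = 2.5*log10(ratio).
aod = 0.5
extinction_mag = aod * 1.086               # ~0.543 mag fainter
time_ratio = 10 ** (extinction_mag / 2.5)  # ~1.65
print(f'{(time_ratio - 1) * 100:.0f}% more imaging time needed')
```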
  8. Such a tool is very tricky to make. First, you need to have quite a bit of knowledge in order to make one, and second, as soon as you start making one you find yourself in a very strange position. In order to make the tool useful to many people, you need to approximate things, which makes the tool very imprecise. If, on the other hand, you try to make the tool actually worth something (precise enough to be useful), then maybe less than 1 in a hundred will be able to use it.
  9. Sub duration will depend on your light pollution (or other noise source) and the read noise level of your camera. You want to swamp read noise with any other noise source (all others grow with time while read noise is constant per exposure). You want to swamp it by a factor of at least x3, preferably x5 (see the sketch below). The easiest way to estimate this is to examine a single exposure taken in expected conditions, measure background levels and derive LP noise from that. If you use an uncooled camera, you could also examine your darks to see if dark current is a bigger factor than light pollution. Total exposure time will depend on many factors, and there are several calculators that will help you determine the total exposure time needed. You will however need to estimate the brightness of the faintest part of the target and your LP levels, and set the signal to noise ratio you'll be happy with for the faintest parts. Most people don't bother with either point 1 (sub duration) or point 2 (total exposure) and settle for some sensible exposure length - long enough, but not so long that they get issues from tracking/guiding (usually in the range from a few minutes to 10-20 minutes). You will need longer exposures if you have CCD vs CMOS, and you will need longer exposures if you do narrowband versus regular OSC/LRGB imaging. Both of these impact the ratio of read noise to background noise. Also, people shoot for some amount of time - like one night - and examine their data. If they are happy, they make an image; if not, they add another night's worth of imaging or even a couple of nights.
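A minimal sketch of the sub-length calculation implied by the swamp criterion, with placeholder numbers for read noise and sky flux:

```python
# Sketch: minimum sub length that swamps read noise by a chosen factor.
# Background noise after t seconds is sqrt(sky * t); we require it to be
# at least factor * read_noise.
read_noise = 3.0  # e- RMS, from camera spec (placeholder)
sky = 0.8         # e-/pixel/s light pollution rate, measured from a sub (placeholder)
factor = 5        # swamp factor: x3 minimum, x5 preferred

# sqrt(sky * t) >= factor * read_noise  =>  t >= (factor * read_noise)^2 / sky
t_min = (factor * read_noise) ** 2 / sky
print(f'minimum sub length: {t_min:.0f} s')
```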
  10. Ok, I think that most of the confusion in threads like these is coming from the following (on my part): - I tend to advocate the approach that is mathematically and physically correct, even if it is not implemented in software - people tend to discuss what is available in software as a means to get the result they want, even if it is not completely physically correct. Sorry that I contributed to the confusion; as the thread title clearly says, what is being discussed is SPCC vs the G2V approach, not the correctness of either method.
  11. Why would this be? If you remove background from each of the channels separately without scaling the signal, it won't mess up color calibration at all.
  12. Yes, I would not do it - there is really no need to do it and it skews the data.
  13. Depends on how linear fit is implemented. I created my own version (it needs aligned and cropped subs to work properly). It is a bit more advanced as it also handles a first order background gradient - it corrects the "tilt", or gradient direction, between subs so the gradient looks the same in both (see the sketch below). If you choose the sub with minimum gradient as reference, all others will have their gradient minimized as well. It also makes gradient removal much easier on the whole stack, as gradient direction will be aligned and the same, so it will look linear in the final image as well. My version works quite well. Not sure how PI or other software have this implemented.
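A sketch of what such a first-order fit could look like - not vlaiv's actual implementation, just the same idea expressed as a numpy least-squares solve over aligned subs:

```python
# Sketch: fit one sub against a reference with scale, offset and a
# first-order (planar) gradient term, so the tilt is matched too.
import numpy as np

def match_with_gradient(sub, reference):
    h, w = sub.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # model: reference ~ a*sub + b + c*x + d*y, solved by least squares
    A = np.column_stack([sub.ravel(), np.ones(h * w), xx.ravel(), yy.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)  # sub transformed to match reference
```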
  14. Linear fit is an extremely useful tool, but it is being misused here. It is actually very useful for preparing subs for stacking. Over the course of an imaging session, as the target changes position in the sky, two things happen: the target changes brightness, and the level of light pollution changes from sub to sub. The target changes brightness as it changes altitude while the earth rotates and the "atmosphere number" (airmass) changes. This part is multiplicative in nature. LP levels also depend on the part of the sky in question, and as the target "tracks" across the sky, LP levels change. In a general sense, the image signal can be written in the form: a * target + b, where a is a constant that depends on atmosphere thickness and sky transparency, and b is a constant that represents the average level of light pollution, or background signal. It is easy to see that the above expression is linear (ax+b), and linear fit changes the a and b coefficients in one sub to match those of another sub, thus making them stack compatible (same signal strength and same background level) - see the sketch below. This is misused as color calibration when linear fit is performed on the 2 other color channels against a selected one (like fitting R and B to G). It leads to a "gray world" type of artificial white balance, and it also tries to make the background color gray. The "gray world" white balance algorithm is based on the premise that the average color in any image is gray (there is as much blue as red and green, so the average pixel value is really gray) - but that is a flawed assumption. Assuming that the background LP is gray is also flawed. In most cases it is orange in color and should not be scaled like that, but rather removed altogether from the image.
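The plain version of that fit is nearly a one-liner in numpy - a sketch assuming aligned, cropped subs:

```python
# Sketch: match one sub's signal scale and background to a reference
# by solving reference ~ a*sub + b in the least-squares sense.
import numpy as np

def linear_fit(sub, reference):
    a, b = np.polyfit(sub.ravel(), reference.ravel(), deg=1)
    return a * sub + b  # same signal strength and background as reference
```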
  15. .... meanwhile (also using linear motion components) ....
  16. I completed a prototype of the first in a series of 3 non-rotating helical-type focusers. Actually, I'm not sure if all three could be called helical - only this one really is - but at least two will be operated the same way one operates a helical focuser, neither will rotate, and all will operate on the basis of a helix. This is actually an upgrade on my previous design that I built for a little hand held scope that I 3d printed around a 40mm binocular objective. Here are the specs: It is a 1.5" focuser (38mm clear aperture) with a T2 thread at the eyepiece end and M72x1 at the telescope side. It has 40mm of draw tube travel and precision of 8mm per turn. This version weighs in at 290g, but that will probably depend on the choice of components and printer settings (infill mostly). It took about 25h of printing time to print everything. I used Creality HP PLA. Most parts were printed with 0.2mm layer height, except for the thread parts, which were done with 0.12mm. Infill was set to 55%, except for the drawtube, where I went with 15% to save print time (rigidity is enforced by the linear rods). The item consists of 5 printed parts, about half a meter of 6mm linear guide rod - or rather 570mm to be precise; it needs to be cut to 2x150mm and 3x90mm - and 4x 6mm PTFE lined bushings. No glue or fasteners were used in this version - everything was interference fit. Total cost is less than 20e. Here are some images and video of it in action: focuser.mp4 The video has the audio removed as it is low quality - but the focuser is not silent. It does produce the sound of plastic rubbing on plastic. The helix part is 3d printed and is not as smooth as I was hoping, although overall performance is not bad by any means. I even applied some lithium grease (although I feel Synta-glue would be a better option for this design). Here is the internal construction and the individual parts: the two body parts are held together with 3 pieces of linear rod (the 90mm long ones). The focusing ring sits in between those two parts (it is a fairly simple design): it has a 4-start trapezoidal thread (30 degrees) with 8mm pitch each https://en.wikipedia.org/wiki/Trapezoidal_thread_form The draw tube is supported by two linear rods (going through bushings embedded in the housing). It is mostly a single piece, except for the part on top that holds the ends of the linear rods.
As this is a first prototype, I've already identified a couple of points that need improvement. 1. Too shallow holes for the linear rods in the draw tube. Both the top and bottom parts that hold the linear rods are 10mm wide. The bottom one has 6mm holes all the way through, while the top one has only 7mm holes that are capped on one end for aesthetic purposes - that side is visible as it faces the eyepiece, the other is inside the tube. It would be better for those parts to be at least 20mm wide. That would add to rigidity, and the interference fit would be stronger. 2. Three pieces of linear rod holding the body together are a rather nice idea - but in order to assemble the thing, the interference fit can't be very tight (even as is, I needed to use a sort of plastic / rubber mallet to force things together). Any play in the focusing ring depends on the spacing between the top and bottom body parts, and if they are held by friction alone on 3 smooth rods, they can budge by a tiny amount, creating enough room for play in the focusing ring. It does require some force to do so, but I'd prefer if those two parts were not so easily moved apart. The solution would be to use M6 threaded rod with nuts instead of linear rod. The holes would need to be smaller, as any play will throw things out of alignment - that part must be a tight fit for everything to slide and move smoothly. Also, one side should have a double nut, as the front and back nuts should be tightened just right. The front nut would sit in a hex hole in the 3d printed part so it would not rotate on its own (and could even be glued in place while still enabling disassembly), but the back nut needs to be tightened against another nut so it does not come loose.
Btw, I think this is a very nice idea for a small refractor build - in the 60-80mm range - or even as an upgrade to the stock focuser on something like an ST80. If anyone would like to try building one (hint @Chriske ?) I'd be happy to post any files required (I have everything as a single FreeCAD project, but can export individual components). I'll probably upload it to one of the 3d print model sharing services in the near future as well.
  17. Very little info on the 9-27mm version, and I think I'd prefer that one for my little Mak - to be a lunar EP - but I'm not completely sure how good it is.
  18. I just read the original post - and sure, if you already have OSC+dual band for color, then shooting dual band + mono for luminance is an excellent option. It will give you by far the best SNR per unit time of all the combinations (much like a lum filter does for LRGB).
  19. Yes, and I think that is the most suitable use for dual and tri band filters - as luminance for NB images. Say you have a Ha/OIII dual band filter - then all you need to shoot is: 1. OIII 2. Dual band filter Spend more time on OIII if it is the fainter of the two (and it usually is). Use the dual band data as luminance. Use the OIII data as the teal component of chrominance, and dual band minus OIII as the Ha, or red, component of the bicolor image (see the sketch below). Just make sure that you have matched exposure lengths for 1 and 2 so you can easily subtract the two. Do the subtraction after the background removal phase, while the data is still linear.
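A sketch of that channel assembly, assuming background-removed, linear, exposure-matched stacks (the arrays here are stand-ins for real data):

```python
# Sketch: bicolor assembly from dual band (Ha+OIII) and OIII stacks.
import numpy as np

dual = np.random.rand(100, 100)        # stand-in for the dual band stack
oiii = np.random.rand(100, 100) * 0.5  # stand-in for the OIII stack

ha = np.clip(dual - oiii, 0.0, None)   # Ha = dual band minus OIII
red, green, blue = ha, oiii, oiii      # OIII split into green+blue = teal
lum = dual                             # dual band stack acts as luminance
```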
  20. Actually, that depends. Take a planetary scope - one with long focal length - and take a budget EP in the form of a Plossl or ortho at a good focal length for planetary use (with a long FL scope you'll have enough eye relief to feel comfortable), and then compare it with any sort of premium eyepiece that is not at a good focal length (either too much or too little magnification).
  21. I don't subscribe to the 3 EPs per scope approach. I can easily list the 5-6 eyepieces one "needs" for a general purpose scope like an 8" f/6 dobsonian. First you need a very low power eyepiece. I used a 32mm Plossl in that role, but now use a 28mm ES 68 degree. Then you need a general DSO eyepiece, around 17-18mm for this scope. Then you need a dual use EP, around 11-12mm - it will be a high power EP for globulars, or a very low power one for planets. Last, you need at least 2 planetary EPs - like a 5mm one for good seeing and a 7-8mm one for average seeing conditions (in worse than average seeing, use the 11-12mm one). I prefer not to use a barlow and like around 60 degrees AFOV (so no very wide options, although I don't mind them - I have the 82 degree ES line).
  22. That is just a software effect and has nothing to do with the data - it happens because of this: imagine you have a background that is not flat calibrated - dark edges and a brighter center - and you set your white point and black point according to those. Now you apply flat correction without readjusting the black and white points - the background will turn to uniform gray, as dark areas will no longer be dark (small in value) but rather corrected. Or in numbers: say you have a target at 1000e, bright background at 100e and dark background at 10e - so your white point is 1000e and black point is around 10e - and you divide by a flat which has 1 for the bright background and 0.1 for the dark background. The target stays at 1000e, the bright background stays at 100e / 1 = 100e, while the dark background changes from 10e to 10e / 0.1 = 100e. Now all the background is a uniform 100e - but the black point is still at 10e, and the background looks grayish because of this.
  23. I now think I understand the root cause of the issues with the 294: poor implementation in the sensor itself (or firmware). If I recall correctly, these sensors have very small pixels that are "joined" into large pixels in groups of 2x2. In that case, saturation must be handled carefully. I've described the case where some pixels hit saturation and others don't - it is because of the random nature of the signal. This can also happen inside a group of 2x2 pixels. Here is an example - say we have a signal that is 100e on average, but clipping is at 110e. In a group of 2x2 pixels, some will record 96e, some will record 105e - and it might happen that one pixel records 112e - but it can't, as clipping is at 110e. When we join pixels, we can assume that clipping is now at 440e, and we will have: 96 + 105 + 102 + 110 as the actual pixel value, so it will be 413e - less than the 440e clipping value - so we can think it is all ok. But it is not, as the actual pixel value should be: 96 + 105 + 102 + 112 (this one got clipped to 110) = 415e. We get non-linearity at high values without visible clipping (see the simulation below). This probably happens only at some gain settings close to the switch point where FWC is small or something, and is probably a design / implementation flaw.
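The effect is easy to simulate - a sketch that clips 2x2 sub-pixels at 110e before summing and compares against the true sum:

```python
# Simulation: clipping inside 2x2 groups causes non-linearity in the
# binned value even though the binned value never reaches 4 * 110e.
import numpy as np

rng = np.random.default_rng(0)
subpixels = rng.poisson(100, size=(100_000, 4)).astype(float)  # ~100e mean signal

true_sum = subpixels.sum(axis=1)
clipped_sum = np.clip(subpixels, None, 110.0).sum(axis=1)

# The clipped mean falls measurably below the true mean at high signal:
print(f'mean true: {true_sum.mean():.1f}e, mean clipped: {clipped_sum.mean():.1f}e')
```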
  24. Actually no. Linear fit is the wrong way to go about things. It imparts the color of light pollution onto the image. One wants to remove the background rather than "equalize" it, for accurate color information. I would say that the following would be an appropriate order of things: 1. remove background signal from each channel 2. (optionally) do G2V scaling 3. apply a color calibration method to the image data In fact, G2V scaling is a color calibration method - inaccurate and highly rudimentary, but it certainly beats doing no color calibration. A better way would be to use a range of star types and compute a transform matrix rather than just 3 values. The 3 scaling values that we get from G2V calibration form just a "diagonal" matrix: [r, 0, 0; 0, g, 0; 0, 0, b]. Instead of computing all 9 values of the matrix, we compute just the main diagonal and treat the other members as zeroes. But this is still not the best way to do color calibration, as star colors are a rather limited set. The best way would be to derive a general type of transform matrix for a large chromaticity set and then do the above photometric approach as a correction factor (see the sketch below).
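A sketch of the full-matrix idea - given measured linear RGB values for a set of reference sources and their known target colors, solve for the 3x3 transform by least squares (all data here is illustrative):

```python
# Sketch: derive a full 3x3 color transform instead of 3 diagonal scales.
import numpy as np

measured = np.random.rand(50, 3)  # stand-in: measured linear RGB of reference stars
true_rgb = np.random.rand(50, 3)  # stand-in: their known/catalog RGB values

# solve true_rgb ~ measured @ M for the 3x3 matrix M, by least squares
M, *_ = np.linalg.lstsq(measured, true_rgb, rcond=None)

calibrated = measured @ M  # apply the same transform to image pixels
```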