Everything posted by vlaiv

  1. Yep, there are 6", 8" and 10" versions
  2. Talk about bed adhesion! The test worm for azimuth movement of the polar wedge is being printed. Yes, that thing is 90mm tall and only about 8mm thick, with different sections being printed at 0.2mm and 0.12mm.
  3. I'd say - decide based on aperture. Up to say 5" go with a Mak - above that go with an SCT, or maybe consider a classical Cassegrain.
  4. That above is clearly M57. It just depends what catalog one is using. Maybe in Messier catalog it is labeled as M82, but in "Mess with people's heads on your favorite astronomy forum website" catalog it is clearly labeled as M57
  5. That is a bit more complex and in some areas a bit different than I intended. Mine would just track in RA, so it would not include counterweight. Wedge part is a bit more elaborate though.
  6. Heard of it, but I was not aware they also have star tracker. Last time I checked it was for motorizing EQ mounts, right?
  7. I'm using FreeCAD. At first, I had some difficulty using it. I was using version 0.19. At that time I was mostly trying to print a decent thread and was using the thread workbench - which turned out to be flawed. I also probably did not understand the workflow in the Part Design workbench very well. After a few days of using it - everything just clicked into place for some reason. I stopped using the thread workbench once I figured out that I can create threads more accurately myself with a bit of help from the wiki (I just looked up the metric thread profile and everything is well explained on that page), and I did not need to mess with industry tolerances / clearances - just the clearances that my printer requires (like 0.2mm clearance from the nominal thread diameter). I did run into the topological naming problem - but since I'm a computer programmer, it immediately became clear what the issue was and how it should be addressed. In fact, if one uses a parametric approach to modelling (using a spreadsheet and calculations to produce model values) - then the topological naming problem goes away. At the moment, I'm rather happy with FreeCAD and what it can do. It is a bit crude UI wise, but quite a capable piece of software.

I ended up using printed bearings for prototyping purposes and will probably advise using regular ones (not too expensive) in the final product for smoothness of motion. While printed variants work - they lack the finesse / smoothness of regular items, even at very low speeds.

I have two more hurdles to overcome now. First is modelling the worm wheel / worm gear assembly. The worm gear itself is easy - it is just a trapezoid profile revolved - but the worm wheel is another matter. A spur gear, for example, does not have a simple profile: it is curved instead of a trapezoid shape - and this is because it is not "linear" but rather follows a circular path (two circles - smaller and larger gear). With the worm wheel / worm gear pair - one is a simple linear / trapezoid profile, but the other must be similar to the above, just a little less bent - it is the shape that is "cut" by the trapezoid as it revolves while the wheel itself is turning. I really need to consult a good machinist's handbook in order to find the proper profile for this.

The other thing is the clutch. I need to design some sort of clutch that will be tightened when the wedge is in position and secure it from moving. All plastic parts have low friction - they are not quite smooth, but they are not suited for brakes either. One solution that I've come up with so far is to simply cut pieces of paper and glue them onto the surfaces that will act as clutch surfaces. Paper on paper has a very high friction coefficient and is a very affordable solution.
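For anyone modelling threads the same way, here is a minimal Python sketch of the idea - basic ISO metric external thread diameters with a printing clearance taken off each diameter. This is an illustration on my part, not vlaiv's actual FreeCAD spreadsheet, and the 0.2mm clearance just mirrors the value mentioned above.

```python
import math

def external_thread_diameters(nominal_d, pitch, clearance=0.2):
    """Basic ISO metric external thread diameters in mm, minus a print clearance."""
    H = math.sqrt(3) / 2 * pitch              # height of the fundamental triangle
    major = nominal_d - clearance              # crest (outer) diameter as printed
    pitch_d = nominal_d - 0.75 * H - clearance # basic pitch diameter
    minor = nominal_d - 1.25 * H - clearance   # root diameter
    return major, pitch_d, minor

# Example: an M72 x 0.75 filter thread with 0.2 mm clearance
print(external_thread_diameters(72.0, 0.75))
```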
  8. Ok, so here is a red flag for me. You don't get to choose gamma. Sure you do - if you consider calibration to be tweaking rather than calibration. It is not an arbitrary process; it is meant to ensure the same output for the same input (input being the digital values of the recorded image).

Gamma is meant to ensure perceptual uniformity. Sensors that record light values (images) are more or less linear - in any case much more linear than human vision. Human vision responds much like most sensory perception in humans (pressure / touch, temperature, sound, etc ...): it is logarithmic / power law in its nature. In other words - when we perceive something as being twice as bright as something else, it does not mean its values are twice as high numerically (twice the number of photons), but rather that its magnitude (log scale) has increased linearly.

The sRGB standard dictates that numeric values correspond to physical brightness with a gamma of 2.2 (roughly - the gamma function for sRGB is defined slightly differently and is only approximated by a gamma of 2.2). This means that if the sRGB values in an image are say 10 for one pixel and 20 for another, the luminosity (emitted photon count) of the second screen pixel is not twice that of the first - the values first have to be gamma 2.2 corrected. You can't arbitrarily use a gamma of 1.8 if you want to adhere to the standard.

That is why I mentioned two points in the above post about calibration. If you only want to get consistency across several of your own screens and don't care about the rest of the world - calibrate your screens to the same values and pick those values as you please. But if you want to follow the standard - make sure you use the standard values for calibration.

Say you set your screen to gamma 1.8 and then go on and create an image that is a uniform transition from black to white - using your own eyes to ensure that the transition is uniform - and save that image. It will no longer look uniform on any computer screen that is even remotely calibrated (at the factory). To show what I mean - here are gamma 1, gamma 2.2 and gamma 1.8 uniform transitions from black to white: If you observe the above image on a regular computer screen - the bottom gradient should look the most natural, linear and uniform. If you calibrated your screen for a gamma of 1.8, then the middle one will look that way on your screen. Gamma 1 won't look right on any screen - but this is how the sensor sees the image, and if you take a photometer and measure the pixel values - they should end up being linear (if viewed on a gamma 2.2 screen).

Now the thing is - only the gamma 2.2 RGB values are in fact going to behave as linear values - because the system expects image values to be gamma 2.2 encoded, as that is the standard. Look at what sort of graphs I get when I plot the above 3 images: black is gamma 2.2, red is gamma 1.8 and blue is gamma 1.0. What should be linear in values is actually pretty curved, the gamma 1.8 one is a bit less curved - and the gamma 2.2 values are in fact linear. Why? Because the system is already expecting values to be encoded with gamma 2.2.

Choosing to calibrate at gamma 1.8 messes with an already well established system - where pixel values are expected to be gamma encoded, and the drivers for the computer screen treat them as encoded and present them with the proper pixel intensities so that our brain sees them as linear (although the intensities measured by a photometer or sensor are not linear).

It is no wonder your processed images don't look good on other display screens if you processed them on a system which forces a gamma of 1.8 while the expected gamma is 2.2. @powerlord Out of interest, which gradient in the above image looks the most linear to you?
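For anyone who wants to generate such test gradients themselves, here is a minimal Python sketch (using numpy and Pillow; this is my illustration, not the exact image posted above) that encodes one and the same linear luminance ramp with gamma 1.0, 1.8 and 2.2. On a screen calibrated to sRGB / roughly gamma 2.2, the last strip should look the most uniform.

```python
import numpy as np
from PIL import Image

width, strip_height = 800, 80
ramp = np.linspace(0.0, 1.0, width)          # linear luminance from black to white

strips = []
for gamma in (1.0, 1.8, 2.2):
    encoded = ramp ** (1.0 / gamma)          # gamma-encode the linear ramp
    strips.append(np.tile(encoded, (strip_height, 1)))

img = (np.vstack(strips) * 255).astype(np.uint8)
Image.fromarray(img, mode="L").save("gamma_ramps.png")
```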
  9. Thing is - it is much easier to calibrate a monitor for gamma and black / white point without any aid. There are a bunch of websites that allow you to do it, and OSs have built-in helpers for this as well. I don't think it is a good idea to produce another version for those that don't have calibrated screens - because you simply don't know in which way their calibration is skewed. Odds are that it won't look good on the majority of screens that are not calibrated in the way you intended. Do try that experiment with processing data in different conditions.
  10. I would not put it like that, as I don't see the analogy. If you want to use audio equipment and draw an analogy there for calibration - the analogy that I'd be happy with goes like this: imagine you have a violin player, a regular set of speakers and monitor speakers in one room, and you are sitting in that room blindfolded. Calibration in this case is a sound equalizer placed before the monitor speakers and the regular speakers (two different equalizers, one for each set of speakers - by the way, it just occurred to me why they are called equalizers in the first place). If calibration is proper, then you would not be able to tell which one is playing - the violin player, or a recording of them played on either set of speakers. Say they are chosen randomly to play for a minute - you would not be able to tell, as the sound would be identical regardless of who is playing. Similarly with computer screens - you have two different manufacturers, so two different devices - and a painting. Properly calibrated screens should present the same color / hue and the same intensity as the painting itself - the light emitted (or reflected, in the case of the painting) should be of the same intensity and produce the same spectral response (not the same spectrum, but the same spectral response in our eyes). That is the point of calibration - to make things uniform / the same, and also to make them the same as the original source (this bit also has the additional component of how the original source was recorded - so we need to calibrate the recording device as well, and that is what conforming to a standard does).
  11. In fact - here is an interesting little exercise for anyone interested. It does not involve calibrated screens - so it can be performed by anyone. Process an image during daytime with daylight in your room. Remember how you processed it and save the image somewhere. In the evening / night - make your room dimly lit, or maybe remove all light sources except the computer screen (like when watching a movie / theater experience), and try to process the same data to the same level as you remember it (don't look at your previous processing yet). Once you are done - save that one as well. Now make two comparisons - one in daylight and one at night time in the same conditions as you used when processing. You will find that your images are: 1) quite different, 2) one looks good in daylight while the other is rather poor, and similarly the other looks good at night while the first one is poor (the night time processing will be understretched in daytime conditions and the daytime processing will be overexposed in night time conditions). The above goes to show that if you work in different conditions - you need to calibrate your screen for each of those conditions and apply profiles depending on the actual conditions at the time.
  12. Think of calibration as "conforming a measuring device to a standard". Imagine we all have rulers, but they all have a slightly different scale. Some of us have a ruler that is close to the metric standard, some of us have a ruler that is further from the metric standard. Now you post your measurement of a cube and say - this cube has a side of 23.4mm. What would you rather do - measure it with a ruler that is calibrated against the metric standard, so that what you've written is actually correct, or accept that your ruler is slightly off and that you are posting a wrong measurement - the justification being that most other people's rulers are skewed in some way and no one will correctly verify your measurement? In my view - just because other people don't calibrate their screens is no justification not to do it yourself if you have the gear for it. However - you should be careful and understand color management and why we calibrate screens - in order to do it properly. There are two primary reasons why we calibrate our computer screens: 1. to make things uniform across viewing devices, 2. to make the viewing device / screen match actual physical color / light and our perception. The first part is much more important to artists and teams working together - but if done on its own (without the other part) - it can lead to "wrong" results. Bottom line - in order to properly utilize color calibration of your screen - make sure that you keep environmental conditions the same and calibrate against those (I see a lot of difference in the same image depending on whether I'm viewing it in daylight or in a dark room at night), and make sure you do proper color profile / color space management when editing your images. Make sure to convert to the sRGB color space when saving an image, as that is the standard color space on the internet at the moment.
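As an illustration of that last point, here is a minimal Pillow sketch for converting an image to sRGB before saving. The file names and the input profile below are placeholders of mine, and this assumes your editor worked in some wider-gamut profile rather than sRGB to begin with.

```python
from PIL import Image, ImageCms

# "processed.tif" and "AdobeRGB1998.icc" are placeholders - substitute your own
# processed image and whatever working-space profile your editor actually used.
img = Image.open("processed.tif")

srgb_profile = ImageCms.createProfile("sRGB")
converted = ImageCms.profileToProfile(img, "AdobeRGB1998.icc", srgb_profile,
                                      outputMode="RGB")
converted.save("processed_srgb.png")
```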
  13. To be honest, I'd be happy with either - small lathe, cnc machine and such are on my wish list. It appears that I've been bitten by some sort of manufacturing bug
  14. Irfanview has a plugin for viewing FITS - but it won't do any stretching on them. FitsLiberator can stretch data, but I'm not sure if it will debayer the image for preview.
  15. I was able to print fine metric pitch threads on mine without too many issues. I do print those in vertical orientation and use random seam position. I printed M72 x 0.75 (the filter thread for my Samyang lens) just fine. Layer height was 0.12mm.

That is way too much. In material, something like that costs maybe a few quid at most. I think that many will find investing in a low cost (but quite usable) 3d printer like the Ender 3 (mine is the V2) quite cheap in the end. Printing just a few items can justify the expense if we go by those prices.

I'm having quite a bit of fun with mine. So far, astronomy related, I've printed:
- bits for a counterweight system for the AzGTI (hand screws for adjusting position on an M10 threaded rod - these hold an M10 nut inside - and a system for centering and holding together some dumbbell weights of 0.5kg each with a 28mm center bore)
- a long dew / light shield for the Samyang 85mm T1.5
- an F/2.4 aperture stop for the same lens (this one screws into the filter thread)
- cable clips (work in progress) to hold in place the power and USB cables on the AzGTI + Samyang setup. I'm yet to design mounting parts for those clips - I need two: one for the vixen dovetail and another to attach to the AzGTI itself (maybe I'll use M10 threaded rod for that?).

Planned so far:
- focus system for the Samyang lens - motor bracket + GT2 gear to go on the focus ring
- motor brackets for an auto focus system for my two other imaging scopes - RC8" and TS80 apo
- Lowspec spectrometer
- spectroheliograph like Sol'ex (I've seen some other variants if I remember correctly, so I'll choose at that time)
- star tracker
- different kinds of bits and bobs - I want to make a small refractor from a surplus lens, and I'll need different adapters to connect everything together.
  16. It's a byproduct of printing layer by layer. Each layer has to start somewhere. When stepping to the next layer the printer stops extruding plastic (retraction), moves to the start of the new layer, primes the nozzle (moves the filament forward) and continues extrusion. This retraction / priming is not an exact science (though linear / pressure advance helps) and a bit more plastic is often extruded, which creates a bump in that place as there is a bit more material. Here is a picture of it (a bit exaggerated, but it is still there even if your printer is tuned properly):
  17. Peak PE is quite easy to measure from a single exposure. You need to create an exposure that is as long as the period of your RA drive and measure the length of the resulting streak. For best results - align RA with, say, the X axis of the sensor and then just measure the length of the streak along the X axis (any Y movement will be due to polar alignment error). Alternatively - you can guide, but only in DEC, with RA output disabled. After converting from pixels to arc seconds - this will be the P2P PE.
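To put numbers on that conversion, here is a small worked example in Python. The pixel size, focal length and streak length below are made up for illustration, not from any particular setup.

```python
# Convert a measured star streak length from pixels to arc seconds.
pixel_size_um = 3.75      # camera pixel size in microns (illustrative)
focal_length_mm = 650.0   # telescope focal length in mm (illustrative)
streak_length_px = 12.0   # measured streak length along the RA axis (illustrative)

pixel_scale = 206.265 * pixel_size_um / focal_length_mm   # arcsec per pixel
pe_peak_to_peak = streak_length_px * pixel_scale

print(f"Pixel scale: {pixel_scale:.2f} arcsec/px, P2P PE: {pe_peak_to_peak:.1f} arcsec")
```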
  18. I think that for a star tracker - cold is much more problematic than heat. I can see people using it at -10C but not at +40C. I'm planning to make parametric models with clearance as a parameter for all the important bits. Btw, I just printed a 51108 axial bearing (40x60x13) in this format: + this, with 0.1mm of distance between groove and ridge. It works fairly well, and rotates under quite a bit of load. The only issue I have is that I had the seam set to "aligned", and it now has one place where it binds up as the seam comes against the seam on the other part. I really need to see how to adjust seams so that they don't end up on sliding parts.
  19. I just finished printing some bearings. A 608, for example, works OK, but being a print-in-place design - tolerances are a bit loose and there is quite a bit of play. The axial bearing is easy to print as it is not a print-in-place design. The radial one is much more tricky and I'm now looking into a design that is assembled instead of printed in place. This will make it much easier to lubricate, and clearances can be tuned so there is minimal play. I guess that I can make the 3d printed versions the same size as actual bearings and then make it a matter of choice? I'll be using 3d printed versions in the development phase (except for the 608s, as I have a bunch of them on hand, so I can try with real bearings and compare) and replace them for the first complete prototype.
  20. I have an idea to design a low cost 3d printed star tracker. Back in the day - barn door trackers were a low cost solution to get into wide field astrophotography. I'd like to go about creating something similar - with the use of a 3d printer. I have an idea of how to go about the tracker itself, but now I'm at the wedge stage and I'm wondering the following: should I go for regular bearings or 3d printed alternatives? At the moment, I think that I'll need something like 6 bearings in total to make wedge operation smooth: 2 for the azimuth axis, 2 for the "altitude swing" and 2 small ones to make the azimuth worm smooth to use. Some of them will be radial type and some axial. So far I've found that PLA on PLA does not have all that much friction. A bit of lube and it can be a very smooth motion. None of these bearings will be particularly load bearing and they won't turn at high speeds. By the way, when I say 3d printed bearings - I don't mean ball or roller bearings - more like the sleeve bearing type, where there is just a smooth contact surface between two pieces. Now, the regular bearings that I would use are not really expensive - like ~1-2e per piece (depending on type). The question is - which one would you rather use and why?
  21. I'll probably revisit this idea in the near future, as I now have all the ingredients needed to try. I have an artificial star, and a 3d printer to print an adapter to attach it to a long FL scope (4" Maksutov). I have a lower focal length telescope - 80mm f/6 - that provides about 3.25 degrees on the diagonal - enough for 770 seconds of tracking, which is more than one worm period of my HEQ5. If need be, I can switch to the 85mm lens for recording, but I'd probably need to model the geometric distortion of that lens first.
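Rough arithmetic behind the 770 second figure, as a sanity check - this assumes the recorded point drifts at the sidereal rate across the 3.25 degree diagonal:

```python
# How long a point moving at sidereal rate takes to cross a 3.25 degree field.
fov_deg = 3.25
sidereal_rate = 15.04               # arc seconds per second (approximate)

crossing_time = fov_deg * 3600 / sidereal_rate
print(f"{crossing_time:.0f} s")     # ~778 s, a bit more than one HEQ5 worm period (~638 s)
```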
  22. Not in that kind of budget. Dob + wide field scope is still best option for covering most bases.
  23. If you compare the RC8" with your current setup, and you bin the RC8" data x3, and you fit your target in the FOV in both cases - you are looking at a reduction of integration time (to reach the same SNR) of roughly 1/4 - or 19510 / 86400 = 0.226 of the current time.
  24. If you compare two different setups - you should use the Aperture^2 * Sampling_Rate^2 formula. If you compare the same setup - native, binned x2, binned x3 and so on - then you can use the simplified formula: SNR improvement = bin factor, or exposure time decreases by bin factor squared. These are really the same - in the second case we keep the aperture the same, so it cancels out of the equation and we are left with the Sampling_Rate^2 part. Binning increases the numerical value of the sampling rate (arc seconds per pixel), and that is why the reduction in exposure time = bin_factor^2. Hope that part makes sense.

I find that most people best understand it by an example that they can replicate, so here is what you can do to get a sense of resolution vs blur. We can take some image - let's take M51 by the Hubble team as our example: It is a very detailed image - a very high resolution image. Look what happens if I take that sort of resolution and reduce the image by a factor of 3 and then enlarge it back by a factor of 3. This is the reduced image: When we scale this back up and compare it to the original - we get: The loss of detail and quality is obvious. When you take an image that is already high resolution (not by pixel count but by level of detail) and you perform this - you obviously lose resolution.

But what happens if we do that on an image that is already blurred? This is an image that much more resembles images taken by amateur astronomers (much lower res than Hubble's). Now we do the same - resample it to 1/3 of its original size - we get this: Now, let's enlarge this small image and compare it to our blurred original: Look at that - there is simply no difference between the two.

It really does not matter that we are so "close in" on the target - if we don't capture the detail. We don't need to "spend" all those pixels capturing the target, as there simply is no detail for all those pixels - the 1/3 size image is enough to capture all the detail, and we can recover "resolution" (or rather pixel count) simply by enlarging the image back - it will be the same as the original one at "high resolution" (or rather high pixel count - resolution is really not pixel count but rather sharpness of the data). You can easily replicate the above results yourself - just use some high quality resampling (I used Lanczos) and a Gaussian blur.
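If you want to try that experiment yourself, here is a minimal Pillow sketch along the lines described above. The input file name and the blur radius are placeholders - use any image you like.

```python
from PIL import Image, ImageFilter

original = Image.open("m51.png")                          # placeholder file name

# Simulate a "blurred" amateur-resolution image
blurred = original.filter(ImageFilter.GaussianBlur(radius=3))

# Downscale to 1/3 of the size and scale straight back up
w, h = blurred.size
small = blurred.resize((w // 3, h // 3), Image.LANCZOS)
restored = small.resize((w, h), Image.LANCZOS)

blurred.save("blurred.png")
restored.save("restored.png")   # compare with blurred.png - they should look the same
```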