Everything posted by vlaiv

  1. I'm after some info on what people work with in their observatories. I need info on the pier heights and obsy space people use. Will a 2m x 2m deck be enough for mostly imaging with "regular" sized scopes? Also, what sort of height should I go for with a ROR deck? Here is a bit more detail: I have a 3m x 5m allowance for an auxiliary building - this is ground coverage. It needs to be at most 3.6m tall - so it is just a single-level, small shed type of building. I want to build some sort of gallery in it that will serve as the deck for the scope. I would also like the space under the gallery to be utilized for storage or similar - and not wasted. In fact, now that I'm thinking about it, will 1.8m x 1.8m be enough? And what is the minimum pier height if I want to observe (while seated)?
  2. Exactly. Look at these two samples: Now, let's for a moment analyze what we see here. This is my reasoning. In the gray nebulosity in the sample above we don't see much noise. This leads me to conclude that this part is high in SNR (not overly so, but enough to be rendered smooth). It is gray as well - which means that the R, G and B signals are of equal strength - so their SNR is similar if not equal (maybe red is a bit stronger, as this patch is more beige than gray). In the second image I can clearly see grain in the red. This means that this patch has lower SNR than the patch above. Lower SNR means greater uncertainty in the signal estimate, and also that the signal itself is lower. Do we have enough SNR there to be certain it is reddish? In any case - you should not worry about your rendition not being good - I just wondered what the explanation for these red tones would be.
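     If you want to put numbers on this kind of eyeball analysis, here is a minimal sketch, assuming the image is loaded as a linear (H, W, 3) numpy array - the patch coordinates are of course hypothetical:

     ```python
     import numpy as np

     def patch_snr(img, y0, y1, x0, x1):
         # Per-channel SNR estimate over a (hopefully featureless) patch:
         # mean as the signal estimate, standard deviation as the noise.
         patch = img[y0:y1, x0:x1, :].astype(np.float64)
         return patch.mean(axis=(0, 1)) / patch.std(axis=(0, 1))

     # Compare the gray patch against the reddish one (coords hypothetical):
     # snr_gray = patch_snr(img, 100, 140, 200, 240)
     # snr_red  = patch_snr(img, 300, 340, 420, 460)
     ```

     Note the caveat in the comment - any real structure in the patch inflates the "noise" term, so this only works on featureless areas.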
  3. I just love the way this image has been stretched. You can see faint things, but when you look at the M31 core - it just looks like a million suns shining at you. Something tells me that if I were floating in intergalactic space, close enough to M31 to see it at that size - the view would be very similar to this image.
  4. I wonder, why do these dark structures end up being red in images? Are they red in real life, and if so, why? Or could it be an imaging artifact? This is my reasoning: this is gas and dust that is reflective in nature - so very low levels of light scatter off it, and that light comes predominantly from surrounding stars. We see them as low-intensity reflection nebulae. Most reflection nebulae are either gray or bluish. This is due to Rayleigh scattering, as the blue part of the spectrum is scattered much more strongly. We can see that in this Iris image from APOD: Original page: http://sai.msu.su/apod/ap051229.html Why do the dark structures appear red then? Three possible explanations: 1. Atmospheric influence. Here we have another "round" of Rayleigh scattering going on, but this time light is scattered away from our targets. This is the reason why everything we record is shifted towards red - lower "temperature" colors (the reason why the sun looks yellow rather than white, and orange/red at sunset / dawn). 2. Light pollution - which is mostly red on cameras - probably because of its spectrum and how it interacts with the atmosphere. This might change with the widespread use of LEDs. 3. Just an artifact of processing. Trying too hard to show dark structures as signal instead of just dark / light-blocking patches. Too much saturation that pushes into the reds? The images above are interesting as a case study of this phenomenon - they contain both gray parts and darker parts turning red for some reason. Why do you think this is?
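     To put a number on how lopsided Rayleigh scattering is - scattered intensity goes as 1/λ⁴, so for representative blue and red wavelengths:

     ```python
     # Rayleigh scattering intensity scales as 1/lambda^4, so blue light is
     # scattered far more strongly than red - towards us in reflection
     # nebulae, and away from our targets in the atmosphere.
     blue_nm, red_nm = 450.0, 650.0
     print((red_nm / blue_nm) ** 4)   # ~4.4x more scattering in blue
     ```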
  5. Maybe something like this would help? https://www.bhphotovideo.com/c/product/1247783-REG/blackmagic_design_convcmic_hs_micro_converter_hdmi_to.html
  6. I just wanted to know what sort of FWHM you were getting with that scope. That is important for the sampling rate - I have a feeling that even 1.1"/px will be oversampling, but that really depends on the usual FWHM that you get in your sky and with your mount / guiding. Ideally, the sampling rate you should target is the FWHM that you usually get (or your best values, if you want to "be ready for those perfect nights") in arc seconds, divided by 1.6.
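     A quick worked example of that rule of thumb (the 2.4" FWHM is just an assumed typical value):

     ```python
     fwhm_arcsec = 2.4                 # assumed typical FWHM in your subs
     target_rate = fwhm_arcsec / 1.6   # the rule of thumb above
     print(f"{target_rate:.2f} arcsec/px")   # 1.50"/px
     ```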
  7. In one very specific circumstance, 6 hours at F/2 will be equal to 73.5 hours at F/7 - if you take your scope and stop it down to F/7 using an aperture mask of ~57mm. Alternatively you can take a ~57mm aperture refractor with 400mm FL, attach the same camera, and yes, in ~70 hours (a bit less - no central obstruction and better light transmission) you'll get the same result. Other scopes at F/7 will probably need less time than that, but the FOV might not be the same. Alternatively, if one uses a different camera - an F/7 scope can potentially capture the same image in even less time.
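     The arithmetic behind those numbers, with the same camera and focal length throughout:

     ```python
     # With pixel size and focal length fixed, exposure time scales with
     # the square of the F-ratio:
     print(6.0 * (7.0 / 2.0) ** 2)   # 73.5 hours at F/7 vs 6 hours at F/2

     # Stopping a 400 mm FL scope down to F/7 sets the mask size:
     print(400.0 / 7.0)              # ~57 mm aperture
     ```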
  8. You have to park to the home position - steppers need it, since this has to do with stepper - mount synchronization and not scope/mount. EQ6/HEQ5 mounts maintain their position information by counting stepper ticks - the count always starts at zero when the mount is powered up (no encoders), so the mount needs to be "reset" to this home position every time, as this "returns the stepper tick count to 0" as well. (In reality it's not the motors that count the steps - it is the firmware. Each time it tells a stepper to do a step, it either increases or decreases a counter, depending on the direction of movement. It knows that each microstep is equal to a certain number of arc seconds - something like 0.2" or similar, depending on the mount - so it can calculate exactly where the mount is pointing from the tick count multiplied by the single-step angle for each axis.) PEC will be enabled in EQMod once you record it, but there is an option to disable it or choose another PEC file, so there should be no problems if you lose synchronization for whatever reason - you just either record a new one or tell EQMod to use a different PEC file - one made by PECPREP. Look at the image I posted above - it has all the controls related to PEC: stop / pause, load, unload and so on. The screenshot is from the EQMod manual in PDF format, so have a look there as well. Just a word of caution - if you record PEC in EQMod now and decide later to switch to the PHD2/PECPREP version - remember to turn off PEC when you record your unguided log for PECPREP.
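     A toy sketch of the bookkeeping described above (the real work happens in the mount firmware; the 0.2"/microstep figure is just an example value and varies per mount):

     ```python
     ARCSEC_PER_STEP = 0.2   # assumed microstep angle - mount dependent

     class AxisCounter:
         def __init__(self):
             self.ticks = 0           # reset to 0 at power-up / park-to-home

         def step(self, direction):   # +1 or -1, one microstep
             self.ticks += direction

         def angle_arcsec(self):
             return self.ticks * ARCSEC_PER_STEP

     ra = AxisCounter()
     for _ in range(18000):           # 18000 steps of 0.2" = 1 degree
         ra.step(+1)
     print(ra.angle_arcsec() / 3600.0)   # 1.0 degree from home position
     ```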
  9. Nice. Does your mount have the PPEC feature or encoders? You need to check your manual, but the issue with EQMod PEC is that you always need to park your mount after you are done, unless encoders are present. This is because synchronization must be kept between where the computer thinks the mount is and the actual gear positions - otherwise PEC will "correct in the wrong place". For this reason I was not able to use both PEC in EQMod and my hand controller - as it is the simple SynTrek version and does not have a park-to-home feature. EQMod does have park to home, but a crash, power failure, or anything else that powers down the mount or computer while the mount is operating and not parked will void the PEC, and a new one must be recorded. PPEC can be loaded into the mount itself and won't be lost, but I have not done that so I don't know the exact procedure.
  10. You can either record it in EQMod - that would be "automatic mode". Just hit "record" while you are guiding and record for at least 4-5 worm periods (that would be around 40 minutes, since your worm period is ~480s, or 8 minutes). Another way to do it is to start PHD2 and tell it not to output guide commands - there is an option to disable guide output - but keep PHD2 logging turned on. Then do about an hour of logging while just tracking a guide star (at declination 0 for best resolution), and I believe you need to hit the "time stamp" button once on the EQMod control panel during this time (this will serve to synchronize the PEC file later). After that - you load the PHD2 log file into software called PECPREP and create the PEC curve for your mount there. You do that by basically filtering out all frequencies that are not harmonics of the worm period - leaving just a few significant harmonics. This software has a list of all significant periods for your mount that you can check. There are tutorials on how to do this on youtube. I think I prefer the latter method as it gives more control, but it's much more involved - you need a couple of hours to get it all done. Simple PEC recording in EQMod is much easier to do.
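     The filtering step boils down to something like this rough sketch (not PECPREP's actual code - just the idea of keeping only worm harmonics):

     ```python
     import numpy as np

     def pec_curve(error_arcsec, dt_s, worm_period_s=480.0, n_harmonics=5):
         # Keep only the first few harmonics of the worm period in the
         # recorded tracking error; discard everything else as non-periodic.
         n = len(error_arcsec)
         spectrum = np.fft.rfft(error_arcsec)
         freqs = np.fft.rfftfreq(n, d=dt_s)
         worm_f = 1.0 / worm_period_s
         keep = np.zeros_like(spectrum)
         for k in range(1, n_harmonics + 1):
             idx = np.argmin(np.abs(freqs - k * worm_f))  # nearest FFT bin
             keep[idx] = spectrum[idx]
         return np.fft.irfft(keep, n)
     ```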
  11. PEC won't solve this, I'm afraid - it is too quick and it is not a harmonic of the base worm period. It can help you with guiding / tracking in general, though. If I'm correct and you are using 4.77"/px guide resolution (Orion mini + ASI120MM), then you should be ok, guide-resolution wise, for an RMS of about 0.7"-0.9". In other words - once your reported RMS gets below 1" with your current guide scope - you can't really be sure whether you are actually guiding at 0.9" RMS, or whether that figure is reported because the guide system resolution is at its limit. For a belt-modded EQ6, if you want to be sure about your guide figures, I would say you need something like 3"/px guide resolution (that is good down to 0.4-0.5" RMS, and the mount will generally do 0.6-0.7" RMS at best - so you should be fine at those figures with that guide resolution).
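     For reference, the 4.77"/px figure follows from the standard sampling formula, using the usual published specs for that combo (3.75um ASI120MM pixels, ~162mm Orion mini guider focal length):

     ```python
     # arcsec/px = 206.265 * pixel_size_um / focal_length_mm
     pixel_um, fl_mm = 3.75, 162.0
     print(f"{206.265 * pixel_um / fl_mm:.2f} arcsec/px")   # ~4.77"/px
     ```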
  12. The belt is a bit elastic. I tightened mine by hand only, but I had to pull rather hard on the motor to get it there. Try the magnifying glass approach - look at the gear/belt meshing to confirm that it is tooth-on-tooth meshing, then tighten the belt (by hand - put some tension on it) and see if you've made a difference to the meshing. There should be some difference, but if you are not getting clean meshing - add more tension.
  13. Looking at your signature - you seem to have an 8" GSO RC? What sort of FWHM do you get in your subs (in arc seconds) with that scope and said camera on the EQ8 mount?
  14. Not necessarily. F/ratio is a ratio of two quantities - aperture and focal length. Sampling rate is again a ratio of two quantities - pixel size and focal length. Keeping the sampling rate the same means keeping the ratio of pixel size to focal length the same. Change focal length and pixel size together and you will maintain your sampling rate. A 2.4um pixel at 600mm is equal to a 3.8um pixel at 950mm - here we changed the focal length and kept the sampling rate the same. This is very valuable with integer multiples of pixel size - as that is easily done as either hardware or software binning (the only difference between the two bin methods being the amount of resulting read noise). For this reason the F/ratio of a telescope won't tell you how fast it is - you did not say what camera you'll be using. It is the reason why a "slow" scope can be faster than a "fast" scope - even with the same camera (if you bin in the case of the slow scope to maintain or exceed the sampling rate, and the aperture of the slow scope is greater). Three variables play a part in all of this - pixel size, focal length and aperture size. You can fix any two of them and play with the third to get different results (pixel size can be changed either by binning or by changing the camera, focal length can be changed by using a barlow or focal reducer, and aperture can be changed by using an aperture stop. Alternatively, aperture and focal length can be changed together - but not necessarily by the same amount - if you change the scope). When choosing a telescope / camera combination - I have found that it is better not to look at the F/ratio of the telescope first, but rather to set the resolution and FOV that you want to work with, then choose the camera (or use the one you have), and from that determine the focal length you need. In the end - get the largest aperture at that focal length that will illuminate the whole sensor with a corrected field, and that you can afford and mount.
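     Checking the 2.4um/600mm vs 3.8um/950mm claim with the standard formula:

     ```python
     def arcsec_per_px(pixel_um, fl_mm):
         # Sampling rate depends only on pixel size and focal length
         return 206.265 * pixel_um / fl_mm

     print(arcsec_per_px(2.4, 600))   # ~0.825"/px
     print(arcsec_per_px(3.8, 950))   # ~0.825"/px - same sampling rate
     ```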
  15. Ah, the old speed thing again. You are both right. Really. It depends on how one looks at things and what remains constant. Speed makes sense when one keeps pixel size constant. Mind you - with AP, speed does not increase like in regular daytime photography, because we operate in "photon starvation" mode - very few photons reach us, and there are plenty of noise sources that impact the result - some of them not related to aperture (dark current noise and read noise) and some that are (LP noise and shot noise). Depending on the ratio of these - we approach the "speed rule" from daytime photography - that F/stop up/down, exposure up/down thing that I never remember. Speed makes no sense at all if one keeps the sampling rate constant. That either means using a different pixel size camera with the other scope, or using binning to some extent. If one keeps the sampling rate fixed, then aperture is what determines "speed", rather than F/speed or F/ratio. Is it worth getting a 12" over a 10"? Depends on whether you have a choice of camera to pair with either, and on what you want in terms of sampling rate and FOV.
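     For the fixed-sampling-rate case, the 12" vs 10" comparison reduces to aperture area:

     ```python
     # With the sampling rate held fixed, "speed" scales with aperture area:
     print((12.0 / 10.0) ** 2)   # 1.44 - a 12" collects ~44% more light
                                 # per resolved patch of sky than a 10"
     ```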
  16. At first I thought so too, but INDIGO, although compatible, aims to be more than indi/indilib. They have a nice explanation on their FAQ page ... Anyway, I'll give it a go at some point (once I assemble all the gear needed - some things are in the mail as we speak) and will report my experience with it back here.
  17. @gilesco Don't laugh. I did not mean an actual shutter icon - but rather a tab page with controls. I just checked the github code and that page indeed looks like a simple imager ... a quick glance at the code looks promising. It seems to feature all the needed controls - location for local storage of images, temperature, exposure time, delay, count, etc ...
  18. Just had a deeper look and I think this is the first one to try out. The official website does not offer much in terms of what is available through the web GUI, but there is a nice "shutter" icon in the top bar - I suppose that can be used to take exposures and do simple scheduling?
  19. I've not discounted KStars/Ekos completely. I might run those on a laptop / desktop computer and connect to an RPI running an INDI server instance. The beauty of the RPI is that memory cards are cheap and one can have multiple OS installations - just by swapping the memory card and booting up, one gets a different environment. I'm mentioning this as I have now seen INDIGO. How come there are two "standards" for this? Or rather, if I understand correctly, INDI is just a messaging standard, indilib is the reference POSIX implementation, and INDIGO is another "more efficient" implementation of the same thing? They are supposed to be compatible, right?
  20. Good point, but I really can't afford to get involved in another project (I have 3 ongoing things at the moment) and I need a quick solution for my needs. This one is also somewhat different in that it needs a running EKOS instance. I had a brief look at the code and the server seems to be just a relay for an RPC-type API. I wonder why it is needed in the first place - maybe the web client could talk to EKOS directly. I'm after a lighter-weight variant in any case.
  21. At the moment I'm thinking of using either Python or NodeJS to make a small server, and an Angular-type web app to do the job. My server will talk to the INDI server on the RPI. The phone will load the Angular app in the browser from the RPI (nginx). I'll just connect the ASI178MCC to the RPI for now and power both from a portable battery - 12V ~20Ah would be enough for a shortish couple-of-hours session. What I need is simple - shoot X subs, Y seconds each, and store them on a USB-attached SSD or similar on the RPI. The web app only needs to report progress (refreshing every few seconds or so) and offer start new sequence / abort current sequence options. Most things will be hard-coded for now (which driver to load, where to store images and so on).
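     As a placeholder for that server logic, here is a minimal sketch of the "shoot X subs of Y seconds" loop, driven through the indi_setprop command-line tool that ships with INDI. The device name is an assumption - check what your INDI server actually reports (indi_getprop lists it):

     ```python
     import subprocess
     import time

     DEVICE = "ZWO CCD ASI178MC"   # hypothetical name - verify with indi_getprop
     SUBS, EXPOSURE_S = 30, 60

     for i in range(SUBS):
         # Setting CCD_EXPOSURE starts an exposure on an INDI camera driver
         subprocess.run(
             ["indi_setprop",
              f"{DEVICE}.CCD_EXPOSURE.CCD_EXPOSURE_VALUE={EXPOSURE_S}"],
             check=True)
         time.sleep(EXPOSURE_S + 5)   # crude wait; a real server would watch
                                      # the property state / incoming BLOB
         print(f"sub {i + 1}/{SUBS} done")
     ```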
  22. What @kens said - this is down to the teeth meshing with the belt on the RA axis. If you open up your mount, take a magnifying glass, and observe how the belt meshes with the gear on the motor shaft while the mount is tracking - you will see that it comes in "tooth against tooth" instead of tooth coming against trough. I had that on my HEQ5 after the belt mod - it showed up at 13.6s (the tooth period for the HEQ5) and I fixed it by adding more belt tension until there was proper meshing.
  23. This is rather normal and should not worry you. In fact, the level of dark current is really low, and the corresponding dark noise is comparable to read noise in magnitude for exposures up to 5 or so minutes. Most importantly - it calibrates out nicely when you calibrate properly. Here is a stack of 16 dark subs from my ASI1600mm (version 2, not pro) at gain 139 / offset 64 / 240s, scaled down and stretched to show the amp glow: I have no issues with amp glow in my images after calibration.
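     To see why this holds at these exposure lengths: dark noise grows as the square root of accumulated dark current. The figures below are rough assumed ballpark values for a cooled ASI1600 at gain 139 - substitute your own camera's numbers:

     ```python
     import math

     dark_current = 0.005   # e-/px/s, assumed ballpark for a cooled sensor
     read_noise = 1.7       # e-, assumed ballpark at gain 139

     for t in (60, 240, 600):
         print(f"{t:4d}s: dark noise {math.sqrt(dark_current * t):.2f} e- "
               f"vs read noise {read_noise} e-")
     # At 240s dark noise is ~1.1 e- - dark noise stays at or below
     # read noise out to several minutes.
     ```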
  24. I'm late to this party, but would appreciate some quick info. I think I get how all of this works, as I'm pretty experienced with computers, and here is what I would like to do: get myself an RPI4, put some bare distro on it without a GUI, plus the INDI server. Use this RPI4 to connect my ASI178MCC and AZ-GTi; I need basic wireless functionality to slew the scope and take exposures - maybe create a basic plan - take X exposures of Y length with Z settings (gain, offset and all of that). This is for the "fun & wide field" imaging rig that I'm putting together. It will use a vintage M42 manual lens and a small-sensor cooled camera. The main goal is to keep it light/portable. The question is: how stable is KStars Lite, as I want to use my mobile phone and operate without a laptop in the field? Will it let me do what I want to do? A laptop is just another item to be powered, and it requires a large external battery to operate for more than 1.5-2h.
  25. Longitudinal chromatic aberration is a change of focal length with respect to the wavelength of light. Lateral chromatic aberration is a change of focal length with respect to both wavelength and angle of incidence. For rays coming in parallel to the optical axis / principal ray - there is no change in focal length; the further away from the center you are - the more the focal length is decreased or increased for the respective wavelengths. You are right - if you use stacking software that is capable of scaling subs in order to align them (there are different operations that can be used to align subs - translation, rotation, scaling, ... and not all are implemented in every alignment algorithm, so scaling might be missing, as focal length does not usually change), and given the following conditions: 1. focal length depends linearly on angle / distance from the center, 2. sensor response is uniform across wavelengths - then you should see no lateral chromatic aberration as color separation, and only slight elongation of stars should be visible (and if you use narrowband images - there should be no elongation). The problem is that these two conditions are never met in reality. If the first is not met - then there won't be a single "scale" factor that one can use to enlarge or shrink a particular channel. The channel will be somewhat warped, and if you align on some stars - others will still show lateral chromatic aberration. The second one just means that stars won't have their "peaks" in the same place, because the camera has different sensitivity at different wavelengths. Once you align the peaks of the stars - the "skirts" will go either way, and if the red skirt goes one way and the blue goes the other - you will still see chromatic aberration.
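     A minimal sketch of the single-factor channel scaling described above (this only helps when condition 1 holds; a real correction would need a position-dependent warp):

     ```python
     import numpy as np
     from scipy.ndimage import affine_transform

     def rescale_channel(chan, scale):
         # Enlarge/shrink one colour channel about the image centre by a
         # single scale factor. affine_transform maps output coordinates
         # to input coordinates: in = matrix @ out + offset.
         cy, cx = (np.array(chan.shape) - 1) / 2.0
         matrix = np.array([[1.0 / scale, 0.0], [0.0, 1.0 / scale]])
         offset = np.array([cy, cx]) - matrix @ np.array([cy, cx])
         return affine_transform(chan, matrix, offset=offset, order=3)

     # e.g. grow the red channel by 0.1% to line its stars up with green:
     # red_aligned = rescale_channel(red, 1.001)
     ```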