Everything posted by vlaiv

  1. First of all - I think your processing skills are extraordinary. I've never worked with data that behaves like this, and with standard tools I can't get anywhere near what you got. It is as if all the sliders run out of space - I want to sharpen more but there is only so far a slider will go to the right. My recommendation would be to restack the data, but this time using fewer frames from each recording. Here is what I believe is happening. The number of stacked subs is a tradeoff: stack too few subs and you will have a noisy result; stack many subs and you will have good SNR. Stack too few subs and you'll be able to sharpen the result properly (as these subs are already not too blurred - high quality); stack too many and no amount of normal sharpening will get you there. This means that there is a "tipping point" - once you stack more subs than this point, you enter a "race": add more (low quality) subs and you can sharpen more because of better SNR, but you also need to sharpen even more because you added blurred subs. And if you try to sharpen enough, you will certainly hit the noise. I think your data is past this tipping point and this is the reason for the noise - if you want it to look as sharp as in your image, you need a crazy amount of sharpening and the SNR is simply not up for it. Try using fewer but higher quality subs. AS!3 has 4 fields where you can enter a percentage - try to make each movie with a range of top percentages used - like 10, 20, 30, 40 - and see what gives you the best results. Also - try doing what I've suggested: joining the videos into one and performing derotation (if you have a significant time gap between any two runs - derotate each movie individually to the same time before joining; else join and then derotate) and then stack. This will select good frames across the whole run - not only in each segment (it is a bit like taking the top athlete from each country and declaring that these are the best athletes in the world in a given sport - it might not be true; one country might have 3 such athletes while some countries would have athletes that would otherwise not qualify - but are still the best in their country). There is a small sketch of this global vs per-segment selection below.
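A minimal sketch of the idea, assuming we already have a per-frame quality score (AS!3 computes one internally; the scores below are just random placeholders):

```python
import numpy as np

def select_top_percent(quality, percent):
    """Return indices of the best `percent`% frames by quality score."""
    n_keep = max(1, int(len(quality) * percent / 100))
    return np.argsort(quality)[::-1][:n_keep]

# Per-segment selection: the best 20% of EACH movie separately
segments = [np.random.rand(6000) for _ in range(6)]   # placeholder scores
per_segment = [select_top_percent(q, 20) for q in segments]

# Global selection over the joined run: this may take most of its frames
# from one good segment - the "best athletes" point made above
global_pick = select_top_percent(np.concatenate(segments), 20)
```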
  2. Very good point - never considered it, but yes - there is a power consumption indicator in the software (cooler running at XX percent) and I bet it will be much lower if the temperature is not set as low as possible.
  3. I really don't remember. It was nothing special - just some decent (but rather no-name) lithium grease. Mechanical parts in a mount don't really experience any sort of extreme - no very high temperatures, no high pressures, no high speeds. Maybe the worst extreme is sub-zero temperatures in winter - but that is probably no concern for you given your location. That means getting just a decent lubricant that will mostly stay in place (no oils or runny stuff). Sure, why not. You might even decide not to sell it. At some point I made a similar decision and purchased an ASI178mmc. It is a versatile little thing - it can do planetary and it can do guiding, although it is very bulky for guiding as it has cooling. I did some long exposure imaging with it (but at a much shorter focal length of 380mm), and it is now in a wide field role (well, actually waiting for me to be in the mood to go actively out again and do some imaging). I paired it with a Samyang 85mm F/1.4 lens on an AZGti mount that was converted to EQ (just added a wedge, CW bar and firmware with EQ mode). I think it will be a cracking combination for wide field.
  4. 1) Offset should be raised - it serves a purpose. The best way to get a good offset is to do the following: - set offset to some value - take a dozen or so bias subs - stack them using the minimum function (instead of average) and find the minimum pixel in the resulting stack - if the minimum pixel is larger than 0 (or other minimal reported value for that camera), then you are done; else increase the offset a bit and return to the second point in this list (there is a small sketch of this check at the end of this post). I like using unity gain and advise it, but as far as I know the color model of that camera had some non-linearity issues and people often used a higher gain value. Not sure if that is also true for the mono model (as far as I know they are slightly different sensors in the mono and color versions, so mono might not have this issue). Do use unity gain, but maybe first do a search on whether the non-linearity issue is only for the color model or for mono as well. If you run into issues with flats - it could be because of this - then raise gain to 200 and something (do a search on the topic - you'll find the recommendation), or we could ask @Adam J? 2) No, it's not correct. Binning is as important on CMOS as on CCD - it is only done differently and it slightly differs in result. If you need to bin - then bin, and if you don't need to bin - then don't. The benefit of software binning is that you can decide to bin after you do the recording - you don't have to decide in advance like with CCD (in order to have the small benefit that CCD binning has over the CMOS kind). 3) Set the temperature as low as you can manage given your ambient conditions. Cameras have a deltaT - how much they can cool below ambient. If for example deltaT is 35C and in the evening the temperature drops below 15C (as it does in winter) - then feel free to cool to -20C; the camera will be able to do it. It is not unusual to have different target temperatures in winter and in summer because ambient conditions change. I use -20C in winter and -15C in summer, and I have even had issues with -15C on some summer nights. If this happens - then go with a temperature that you can achieve on a given night - you can always shoot a matching set of darks afterwards to calibrate your data (that is the point of camera cooling really - to be able to recreate the target temperature for dark calibration). 4) You will see it in your image. If you happen to have issues with corner stars - like stars being elongated or out of focus in one or two corners while the others are fine - then you might have tilt. Tilt can be due to the camera or due to the focuser. If you rotate the camera and shoot the same target and the elongated stars also rotate in the FOV - then it is sensor / camera related tilt, but if the star issues remain in the same place with respect to the actual target in the image - then you have focuser tilt. In any case - don't worry about it until you need to worry about it - and you will know if it comes to that. 5) That really depends. It is best to shoot L when seeing is good, as it carries most of the detail information / sharpness. The color part does not carry nearly as much detail (and people often bin color to capture it faster, because of this). However - you never know if the first or second night will have better seeing. Similarly - you want to do L and B on a night of good transparency - because again L will bring in detail and SNR (most noise also comes from luminance, so you want good luminance data).
Blue is often the weakest of the colors - both because attenuation / scatter is greatest at those wavelengths (that is why the sky is blue) and because sensor sensitivity is lowest in blue (with a few camera exceptions that have high QE in the blue part of the spectrum). If you shoot part of the subs on one evening and part of them on another - you run the risk of having subs with different SNR in your stack. That is not ideal, and if anything, try using a stacking algorithm that assigns weights to each sub to compensate at least partially for this.
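Here is a minimal sketch of the offset check from point 1), assuming bias subs saved as FITS files (astropy and the bias_*.fits filename pattern are my assumptions):

```python
import glob
import numpy as np
from astropy.io import fits

def min_stack_floor(pattern="bias_*.fits"):
    """Minimum-stack a set of bias subs and return the lowest pixel value."""
    stack = None
    for path in glob.glob(pattern):
        data = fits.getdata(path).astype(np.int32)
        stack = data if stack is None else np.minimum(stack, data)
    return int(stack.min())

# If this prints 0 (or the camera's minimum reported value) - raise the
# offset a bit and repeat; once the floor is above that value, you are done.
print(min_stack_floor())
```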
  5. I think that they had a range of camera models with an "OAG" type guide sensor integrated. For example this one: http://www.company7.com/library/sbig/pdffiles/cat_9xe.pdf
  6. Well, according to this: https://theskylive.com/pluto-info#brightness if one is going to look at Pluto - better do it soon, as brightness is going to go down for the next hundred or so years. Btw - correction, the current value is 14.44 but it will brighten to 14.34 in July 2023.
  7. According to almighty Google - it is mag 15.1 - so rather faint. I think that mag 15 isn't accessible with apertures smaller than 12".
  8. That is another way to add a 1.25" nosepiece, but in the case of a finder scope - that won't do, as you need a short adapter. Cameras are usually attached to a finder scope with this kind of adapter: https://www.firstlightoptics.com/adapters/astro-essentials-sky-watcher-9x50-finder-to-t-adapter.html Or perhaps something like this: https://www.firstlightoptics.com/adapters/astro-essentials-sky-watcher-9x50-finder-to-c-adapter.html depending on whether the camera has a T2 thread or C thread connection. But in your case - it has neither and you need to fashion something. I'd look into one of the following two ways - if you have access to a 3d printer - then you are pretty much sorted - just print a suitable adapter depending on your guide scope. If not - try to find a plastic cover that will fit on the finder tube nicely (similar to the finder scope cover on the front side). Then you can cut a hole in the center and glue it to the web camera instead of the film canister. Before you glue it - just check if you can reach focus at infinity with it (finders can adjust focus by moving the lens on its thread so you can do fine focus - just make sure you are in the ballpark to be able to focus properly). I'm not familiar with that particular Celestron motor upgrade - but I do know that Skywatcher has two different motor upgrades for its EQ3/EQ5 mounts - one with an ST4 port and one without. If there is a port on your hand controller marked AutoGuider - or ST4 - using an RJ11 connector (analog phone connector - similar to an RJ45 ethernet connector but using only 4 wires), then you can use the ST4 protocol. We now just need to figure out how to use the ST4 protocol directly from a laptop (guide cameras have an ST4 port that you can use via their USB driver - the computer connects to the camera via USB and the camera connects to the mount via ST4). Well, there is an adapter: https://telescopes.net/zwo-usb-st4-adapter.html And maybe there is a DIY solution for it as well. But first check if your hand controller has an ST4 port and can be guided. If not - not all is lost, but it will depend on how much DIY you want to do. https://github.com/TCWORLD/AstroEQ This is a way to make your own Arduino controller that will talk to the stepper motors you already have (hopefully they are stepper motors). It will be able to connect to a computer and then you can use all sorts of advanced features like plate solving, planetarium software to point your telescope at a wanted target and so on... The alternative is of course - a mount upgrade.
  9. The only way is to let software do it for you. On most panels stars are properly aligned - except in a few places. In order to see if the image can be properly aligned - maybe try using Microsoft ICE for making the mosaic. It is much more advanced for that sort of thing, as it is made for camera lenses and panoramic images. Lenses have much more geometric distortion than a simple system like a telescope (which should not have any distortion except for the standard sphere to flat surface mapping issue). The only problem is that Microsoft ICE works with 16 bit data, and you probably need to feed it stretched images, which poses a problem of its own. Maybe the best way to do it would be to perform stacking and then apply an identical non linear transform to each stack. Something like background removal followed by a gamma transform (with gamma being in the range of 2-3, or even more if it provides a better result) and conversion to 16bit - a minimal sketch of that transform follows below.
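A minimal sketch of such a transform, assuming a linear stack loaded as a float numpy array (using the median as "background" is a deliberate simplification):

```python
import numpy as np

def stretch_to_16bit(stack, gamma=2.5):
    """Background removal + gamma stretch + conversion to 16 bit."""
    data = stack - np.median(stack)            # crude background removal
    data = np.clip(data, 0, None)
    data = data / data.max()                   # normalize to [0, 1]
    data = data ** (1.0 / gamma)               # gamma stretch, gamma 2-3
    return (data * 65535).astype(np.uint16)    # 16 bit for Microsoft ICE
```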
  10. These are shadows from dust particles - but in order to show on the sensor, the dust must be fairly close to the sensor - otherwise the "dust doughnut" will be too large and too "out of focus" to show. You can calculate the approximate distance from the sensor. Try one of these: https://astronomy.tools/calculators/dust_reflection_calculator https://www.wilmslowastro.com/software/formulae.htm#Dust My guess is that they are on the sensor cover window glass of the camera, unless you use some sort of filters / filter wheel - in which case they are on the filters. There is a rough sketch of the calculation below.
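Roughly what those calculators do - a sketch assuming the dust mote is small compared to its shadow, so the light cone at focal ratio F grows by 1/F per unit of distance from focus:

```python
def dust_distance_mm(shadow_diameter_px, pixel_um, f_ratio):
    """Approximate distance of a dust mote from the sensor."""
    shadow_mm = shadow_diameter_px * pixel_um / 1000.0
    return f_ratio * shadow_mm

# e.g. a 60 px doughnut with 3.75 um pixels at F/5 -> ~1.1 mm from sensor
print(dust_distance_mm(60, 3.75, 5))
```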
  11. Polar alignment is the procedure of aligning the RA axis of the mount to the rotation axis of the Earth (both will then point near Polaris - a tiny bit to its side, at the actual NCP - the North celestial pole). After it is completed, the mount should track properly - if it is perfect. However, no mount is perfect, and for that matter - pointing it near Polaris is not done perfectly either. A small deviation in polar alignment causes a small drift in DEC over time. Since mounts are not perfect - in the sense that mechanical components are not perfect circles, as there are manufacturing tolerances and errors - we get another kind of tracking error - mount periodic error. Even if you only roughly polar align (within a few arc minutes of the NCP) - that error will usually be smaller than the mount periodic error - which can be significant. Both of these cause star trailing, because both errors cause stars to drift from their original position over time. Polar alignment drift is in the DEC axis, while periodic error is in the RA axis and, as the name says, is close to periodic in nature. Here is a recording that shows both nicely: In the above gif - the mount is actually tracking, and the motion that you see is solely due to error in tracking. The slow drift from right to left is due to a small polar alignment error - it is very uniform and not large in magnitude (each individual frame of that animated gif is a one minute long exposure). The jumping up and down is periodic error - you can see why we call it periodic - it sort of repeats time and again - almost the same each time (but not quite). This is because there are different gears in a mount and each of them is slightly out of round and sometimes trails and sometimes leads. When all those little errors combine - we get this kind of wavy motion. Back to guiding - guiding is the solution to the above problem. With guiding - there is a separate camera on a separate scope (or an OAG - off axis guider - a special device with a little mirror that lets you attach a second camera to the main scope) that takes a short exposure every few seconds and compares star positions to previously recorded stars. If it detects any sort of drift between two exposures (even with sub pixel accuracy) - it will instruct the mount to correct its position to compensate (this loop is sketched in code below). In order to do that you'd need: 1. guide scope (or OAG, but a guide scope is simpler to begin with) 2. guide camera 3. computer that issues corrections to your mount. All of that can be pricey, but it can also be made really cheap - depending on how you want to approach it and whether you can do a bit of DIY. Probably the cheapest option is to a) use your finder scope as a guide scope with a simple adapter (unscrew the finder eyepiece and put an adapter for the camera in its place) b) use a simple web camera as a guide camera. There are several models that are suitable for this, and it involves removing the front lens of the web camera and fashioning some sort of adapter to attach it to the guide scope. I once made such a camera from a Logitech C270 - although I did not use it as a guide camera but rather as a planetary camera (guide cameras and planetary cameras are very similar in their specs). I used 32mm OD PVC pipe sanded down as a nosepiece c) This tends to be the most expensive part if you don't already own some sort of laptop - but an old laptop, tablet or even a single board computer like a Raspberry Pi can be used for this. In any case - if you look at images of people's imaging rigs - you will often notice a small scope with a camera attached - like this: (this is a very nice annotated image from the Astrobackyard website).
You of course don't need all the gadgets shown in the image - however, note the guide scope and guide camera being attached. Youtube contains a lot of videos showing how to guide and what sort of setups people use, so it's worth exploring a bit as well.
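A minimal sketch of the guiding logic itself - the camera and mount objects here are hypothetical stand-ins for real driver APIs (guiding software like PHD2 does this far more robustly):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid - a crude stand-in for star detection."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def guide_loop(camera, mount, gain=0.7, exposure_s=2.0):
    ref_x, ref_y = centroid(camera.expose(exposure_s))  # reference position
    while True:
        x, y = centroid(camera.expose(exposure_s))
        dx, dy = x - ref_x, y - ref_y      # drift in pixels (sub-pixel)
        mount.pulse_ra(-gain * dx)         # correct only a fraction of the
        mount.pulse_dec(-gain * dy)        # error, to avoid oscillation
```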
  12. Hi and welcome to SGL. That is an interesting looking scope. I'm surprised that it is an F/7 triplet at 80mm. Are you sure of those specs? If yes, then don't change the scope unless something is seriously wrong with it. It can serve you in a wide field role for many years. If it is a doublet - then it will be fine for some time until you get some experience, but at some point you'll likely want to upgrade. The first and most important thing for an AP setup is the mount. That should be your number 1 priority for upgrade. Ideally you'd want something like a Heq5 (second hand if budget is tight) - or at least an EQ5 with goto. Get the best mount that you can for your budget. All the rest can be very cheap and you can use what you have. The next upgrade would be auto guiding. You'll need some sort of guide camera and guide scope. This can be rather inexpensive with a bit of DIY. People use modified web cameras (with the front lens removed to expose the sensor), mounted on a finder scope instead of its eyepiece. There are adapters available to attach a camera (via T2 thread) - or if you have access to a 3d printer - that can solve a lot of things rather cheaply (for example - you can make your own guide scope from a lens from old binoculars, some PVC pipe and a few 3d printed parts). So yes, that is the order of upgrades: first and most important - the mount; second - think about guiding that mount. The rest is fine for the time being.
  13. Could be down to processing as well. Maybe try to limit how much you push the data - find the place where noise starts to show too much and then back off a bit? You could also post the raw stack so people can process it with their workflow and confirm if the data is really noisy or if it is just the processing.
  14. That is quite a lot - not sure where the noise is coming from, then. What camera and settings?
  15. I would not call a 180mm scope smallish. Out of those 6000 in each run, how many actually ended up in the stack? Was it always the same number? If not - try doing the whole process again, but this time making sure each of those 6 stacks has the same number of subs in them (say 10% or however you see fit). If you use a different number of subs in each stack - then each stack will have different SNR, and just regularly stacking images with different SNR produces sub optimal results. The best way to explain this without getting into complex math is to observe what happens with the diagonal of a square vs a rectangle. With a square, the diagonal is noticeably longer than either of the sides (since the sides are equal we can say just one side): But as you make more difference between the two sides - the diagonal brings less "benefit" - or rather won't be as long compared to the longer side as in the square case: In the above image I added a circle so you can see how small the difference is between the diagonal and the longer side (the analogy is put into numbers below). Alternatively - join the AVIs into one long movie (using PIPP) and then stack that (or maybe first derotate it in WinJupos if the recording is longer than say ~5 minutes between the start of the first movie and the end of the last one). This is the better method, as stacking software will then pick subs according to their quality, and you might end up with 90% of subs from one movie if they are better than all the rest.
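The diagonal analogy in numbers - combining two stacks behaves like the diagonal of a rectangle whose sides are their SNRs:

```latex
d = \sqrt{a^2 + b^2}
% square (equal stacks):   a = b = 1      ->  d = \sqrt{2} \approx 1.41 (41% gain)
% rectangle (unequal):     a = 1, b = 1/3 ->  d \approx 1.05 (only ~5% gain
% over simply keeping the better stack on its own)
```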
  16. A much more realistic scenario is a very low read noise camera. With long exposure and "real time readout" - you would get the actual star in the first readout, a superposition of it and the "next" one in the second readout, a stack of 3 exposures in the third readout and so on - as signal accumulates. A much better idea is to have a very low read noise camera and simply do the following: Say you happen to have a 0.1e read noise camera and you read the whole sensor every 1 second. In a regular exposure of say 256 seconds (which is about 4 minutes) - the total "read noise", if each 1s sub was simply added without alignment, would be x16 that of one sub (read noise adds in quadrature, and sqrt(256) = 16) - so 0.1e x 16 = 1.6e of "read noise" - comparable to very good current cameras. However, one could easily guide on such short exposures and even implement an FWHM rejection filter for those short subs - effectively recording one "regular" sub when a certain number - again say 256 - of these short subs is added up. No additional memory would be needed in the camera (except for holding the current frame) - every readout would be inspected for star position and also added to an "accumulator" image; after the regular readout the accumulator image would be reset to zero and the process started again. A minimal sketch of this loop follows below.
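A minimal sketch of that loop, with hypothetical camera and inspect_stars stand-ins (the point is the accumulate-and-reset structure, not the API):

```python
def record_regular_sub(camera, n=256, exposure_s=1.0):
    """Sum n short readouts into one 'regular' sub, inspecting each one."""
    acc = None
    for _ in range(n):
        frame = camera.read(exposure_s)     # one short, low read noise frame
        inspect_stars(frame)                # guiding / FWHM rejection hook
        acc = frame if acc is None else acc + frame
    # read noise grows as sqrt(n): 0.1e * sqrt(256) = 1.6e for this sub
    return acc                              # accumulator resets on next call
```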
  17. Not really guiding - it is just a correction after the exposure has been taken. The mount tracks during the exposure on its own - no corrections are performed when it counts. In principle any software should be able to do it - regardless of ROI or full frame download.
  18. Indeed, and I think it is a shame it is over sampled. With such short exposures, there is not much signal in the image, and every little bit of SNR helps - both to establish if a sub is viable and should be used in the stack, and to help with the stars used for alignment when stacking.
  19. Quick inspection suggests that he is about x2.1 over sampled there. Here is the frequency response of that image: As you can see - all the signal is concentrated in the central part; the rest is just noise. Inspecting the "edge" of the frequency signal gives ~4.3 px per cycle (instead of the 2 px per cycle needed for proper sampling) - a rough sketch of this inspection follows below. However, most of the exposures are in the 0.5s range, and I wonder how many are discarded?
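A rough sketch of that inspection - radially average the power spectrum and look where it falls into the flat noise floor (the factor-of-2 threshold is an arbitrary choice of mine):

```python
import numpy as np

def px_per_cycle_at_cutoff(img):
    """Estimate sampling in px/cycle where signal meets the noise floor."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = np.array(power.shape) // 2
    ys, xs = np.indices(power.shape)
    r = np.hypot(ys - cy, xs - cx).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    profile = np.bincount(r.ravel(), power.ravel()) / counts  # radial average
    noise_floor = np.median(profile[len(profile) // 2:])      # outer = noise
    cutoff = np.argmax(profile < 2 * noise_floor)             # first noisy radius
    return min(img.shape) / cutoff    # 2 px/cycle = critically sampled
```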
  20. Ok, so here are a few guidelines then. You mentioned that you are a technician, so I won't shy away from technical stuff. I'll also be very brief and to the point, as there is a lot to cover. Regardless of the fact that you'll be using short exposures - anything expressed in seconds rather than milliseconds is long exposure as far as seeing is concerned. For long exposure, here is a breakdown of what you can expect in terms of resolving the target. There are three main components to the blur affecting the image. 1. Seeing - expressed as FWHM in arc seconds, representing the full width at half maximum of a Gaussian profile approximation to the seeing blur. It is measured with a very large aperture on a very good mount over the course of a 2 second exposure (see - as soon as we step into seconds, seeing averages out). 2. Mount tracking / guiding performance. If you don't guide - this is nothing more than a guess. I'll give you some guides on what you can expect from an HEQ5 type mount later on. If you guide - then you have a measure of how good your tracking is, expressed as RMS error in arc seconds. The two are related by the simple equation FWHM = 2.355 * RMS (for a Gaussian profile). We always use a Gaussian profile for approximation as it is fairly accurate (central limit theorem) and easy to work with. 3. Aperture size. Here we approximate the Airy disk with a Gaussian profile. It holds true for a perfect aperture, but in reality, especially when using correctors (which correct over the whole field but degrade on-axis performance), this blur is somewhat bigger. When available, spot diagram RMS is a good alternative (and often more precise if one is using correctors or reducers). Once we have all three values (measured or estimated) - the total blur is the square root of the sum of their squares. This lets us get an estimate for the expected FWHM of stars in our image - which is in turn tightly related to sampling rate (as stars are point sources and their profile directly represents the PSF of the blur). A very simple relationship that you can use (but the math behind it is not so simple) is sampling_rate = FWHM / 1.6 (there is a small calculator sketch at the end of this post). Good seeing is 1" or below - that does not happen often. There are several websites that offer seeing forecasts - and they are usually fairly accurate, provided you make sure you don't have any local influences (which can be very detrimental) - like poorly cooled optics, or seeing disturbances around - hot roads, large bodies of water, houses with heating and chimneys and so on. More often seeing will be around 1.5"-2.0". That can be taken as average. As far as mount performance goes - unguided performance usually depends on two things: a) periodic error b) poor polar alignment. People always seem to blame poor polar alignment for star trailing - but in my view and experience, periodic error is much more responsible for star streaks and poor mount performance. With either of the two you must estimate the drift rate and limit your unguided exposure depending on the wanted resolution (the one you are aiming at). For the HEQ5 you can safely estimate that periodic error is about 35"-40" peak to peak. In fact, I once did a recording of the unguided performance of my HEQ5 and here is what it looks like: The up-down motion is periodic error (and you can clearly see how it periodically repeats - hence the name) - the right to left drift is due to polar alignment. You can clearly see that PE is much larger in magnitude over shorter periods of time than the PA error.
The HEQ5 mount has a worm period of 638s, and if you have say 35" P2P periodic error - that means that the mount will trail / lead - or drift in general - for 70" (there and back again, by Bilbo Baggins). If the drift were uniform (and it never is, as seen from the recording) then you would have a drift of 70" / 638s = ~0.11"/s. This is an important number - drift due to periodic error will sometimes be more than this and sometimes less. If you take this as a reference, then in a 30s exposure you will have ~3" of trailing on average. Half of the frames will have less than this, but half will have more. You will probably discard those worse than this (or even at 3" trailing). In fact, in the above recording you can clearly see how in some subs stars get elongated while in others they are round. I think I used 1 minute exposures there on my HEQ5 (1200mm FL and 3.75um pixel size). This is so that you understand that there is a percentage of subs that you will have to throw away if you don't guide, and that percentage will depend on your tolerance for elongated stars and the exposure length that you'll be using. Back to resolution. We have seeing that is around 2"; when you start guiding you can expect a stock HEQ5 to guide at about 1" RMS, and we have 8" of aperture. If you use a simple coma corrector - you'll get spherical aberration on axis and star bloat, but for the sake of simplicity let's go with a diffraction limited scope. In those conditions - your final star FWHM will be ~3.14", and that will support about 1.96"/px resolution. That is about a 9.7um pixel size (so you know how much you'll have to bin based on the initial pixel size - at least x3 if using 2.9um). Further - most galaxies are rather small in size. Someone mentioned trying M82 - which is about 11' long, or 660". At 2"/px - that is only 330px. I'm just letting you know what you can expect. And that is with guiding (stock mount). Just tracked - you probably won't achieve 2"/px resolution due to the additional blur. I'm not saying this to put you off - but rather to prepare you. If budget is tight - you might consider using a simple web camera for guiding and modifying your finder scope for that role. Any sort of guiding will be better than no guiding at all. On the other side of the spectrum - when you tune and mod your HEQ5 - the best you can hope to achieve is around 0.5" RMS guide error. I once managed to go as low as 0.36" RMS and have a screenshot to prove it. The list of modifications that I did to my mount is: 1. All bearings replaced with high quality SKF ones 2. Mount cleaned, regreased, tuned 3. Periodic error correction recorded and applied (in EQMOD) 4. Saddle replaced with the Geoptik dual Vixen/Losmandy variant 5. Rowan belt mod 6. Berlebach Planet tripod. If we have a diffraction limited 8" scope, 0.5" RMS guiding and happen to image in 1.5" FWHM seeing - we can hope to achieve about 2" FWHM, or about 1.24"/px. Even at this resolution - most galaxies will be just a few hundred pixels across, and that is about as good as you can get (maybe down to 1"/px - in ideal conditions and with a better mount and larger aperture). In the end - I want to explain one more thing - when I say that you should aim for say 2"/px because that is what your setup / sky can support - that does not mean that you can't image at 1"/px or even 0.5"/px. Sure you can, but two things will happen: 1. you will record an image that is very devoid of detail when viewed at 100% zoom - since you are over sampling 2. As soon as you start over sampling - you have a slower system than if sampling properly.
Light is spread over more area and each pixel receives fewer photons / less signal. Less signal means less SNR, and astrophotography is all about SNR. If you want a large galaxy image (devoid of detail) - then I advise you to sample it properly to get the best SNR and then simply enlarge the image in software - the result will be the same as far as detail goes - neither approach can pull detail out of thin air.
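Putting the numbers from this post into one small calculator - a sketch, assuming Gaussian approximations throughout and 550nm light for the Airy FWHM:

```python
import math

def airy_fwhm_arcsec(aperture_mm, wavelength_nm=550):
    # Gaussian approximation to the Airy disk: FWHM ~ 1.02 * lambda / D
    return 1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265

def expected_star_fwhm(seeing_fwhm, guide_rms, aperture_mm):
    """Total blur = square root of the sum of squares of the components."""
    guide_fwhm = 2.355 * guide_rms               # RMS -> FWHM
    return math.sqrt(seeing_fwhm ** 2 + guide_fwhm ** 2
                     + airy_fwhm_arcsec(aperture_mm) ** 2)

fwhm = expected_star_fwhm(seeing_fwhm=2.0, guide_rms=1.0, aperture_mm=200)
print(fwhm, fwhm / 1.6)    # ~3.14" FWHM -> sample at ~1.96"/px

# unguided drift budget: 35" P2P over the 638s worm period is 70" of
# travel, or ~0.11"/s on average
print(2 * 35 / 638)
```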
  21. Probably exposure length. It needs to be very short - maybe even the same as the capture exposure - that would be around 5ms. The planet will be dim at that sort of exposure even if you up the gain - so don't try to make the planet brighter by using a longer exposure - that will just give seeing enough time to blur the view. You want the planet to jump around from seeing but not to blur from it - if that makes sense.
  22. Depends on two factors. First is the size of the blocking filter, and second is obviously the size of the field stop of the eyepiece. The way you are now using your system - that is 560mm with x4.3 amplification - you get 2408mm of FL. That makes the solar disk 21mm in size. That is quite big. You need an eyepiece with at least a 23-24mm field stop - so that would mean a plossl of 25 to 32mm focal length to view the whole disk. That is not the limiting factor. What is the limiting factor is the size of the blocking filter. It might be counter intuitive how a 12mm blocking filter can produce a full solar disk - but remember, we are in a telecentric beam and the blocking filter is away from the focal plane. This is what happens: right is the projected solar disk and left is the blocking filter (the image is not to scale, but rather a diagram). The central beam will pass as is. Since it is operating at F/30 - if the blocking filter is 12mm - it can be 30 x 12mm = 360mm away from the focal plane and the central beam will still pass at 100% (that is 36cm - I'm sure that the blocking filter is much, much closer to the focal plane - maybe 7-8cm away). However - edge beams won't pass fully and will be clipped - so the image towards the edge of the FOV will darken. We don't perceive things linearly - so even if the light is cut down to only 50% of the original - we won't see it as being half the brightness (in fact we don't even notice the first ~7% of the drop in intensity). Note the light rays in the above diagram - they are not how light rays are usually drawn, but are in fact a consequence of the telecentric lens. It turns diverging rays into parallel ones, unlike a barlow, which spreads them. This of course means that for the same image size a barlow would vignette less, but the etalon likes perpendicular rays - that is why a telecentric lens is used (and is better for this application). Back to the question - as is, with the x4.3 telecentric lens - even a 21mm blocking filter would vignette somewhat - but it would be much less noticeable. The only way to really get a vignette free image is to reduce the solar disk size, and the only way to do that is to reduce the focal length (the numbers above are put into a small sketch below).
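The numbers above in a small sketch, assuming a ~0.5 degree solar disk (disk size in the focal plane is focal length times angular size in radians):

```python
import math

def solar_disk_mm(focal_length_mm, disk_deg=0.5):
    """Size of the solar disk in the focal plane."""
    return focal_length_mm * math.radians(disk_deg)

print(solar_disk_mm(560 * 4.3))   # ~21 mm at 2408 mm of FL
print(solar_disk_mm(560))         # ~4.9 mm at the native 560 mm

# max distance of a 12 mm blocking filter from focus for an unclipped
# central beam at F/30: the cone stays inside 12 mm out to 30 * 12 mm
print(30 * 12)                    # 360 mm
```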
  23. Depends on how you use it. What the combo model offers is flexibility. An etalon needs a collimated beam for optimum performance. Neither of the quarks uses a collimated beam (like front mounted etalons or dedicated solar scopes with special collimation elements), but both use the "next best thing" - a telecentric lens. Or rather - the regular quark has a x4.3 telecentric lens integrated - but for the combo - you need to provide your own way of making the F/ratio of the telescope slower. This can be done in one of two ways (or even a combination). First is by use of a telecentric lens (or barlow, but telecentric performs much better for this application due to the light path / angles involved), or second - by use of an aperture mask. You can even combine the two to get results somewhere in between. Why is this interesting? Well, because you can utilize a range of focal lengths and hence magnifications of the sun's disk in the focal plane. If we have a fixed size blocking filter, then we can change how much of it is visible in the FOV with respect to the solar disk - by changing focal length. With your 80mm F/7 scope - you have several options. That scope is a 560mm FL scope, so you can choose to use it as is with say a 28mm aperture mask to give you an F/20 system. With a 560mm FL scope, the full solar disk will be about 4.9mm in the focal plane. That is much smaller than the blocking filter and you'll be able to view the full solar disk. The only drawback is that you'll be limited to about x50 as far as resolved detail goes (small aperture). You can then choose to use a x2 telecentric lens to increase your FL to 1120mm. If you want to keep an F/20 beam - you'll be using about 56mm of aperture. This will also increase the full solar disk to ~9.8mm in diameter. With a 12mm blocking filter - there will be much less room around the disk (with the larger diameter in the combo there will still be plenty of room ...). Then there is the option to use a x3 telecentric lens and any sort of aperture mask if one desires, or leave the system at F/21. However, the solar disk will be about 15mm in this case. So yes, you can set it up to have more "room" around the disk - even if the combo has the same 12mm blocking filter - but as far as I gathered it has about 21mm (do keep in mind that the blocking filter is not at the focal plane and its edges won't be in focus, but rather it will create some sort of vignetting, so the real usable field will depend on the distance of the blocking filter to the focal plane and will be less than the size / aperture of the blocking filter).
  24. Oh I see. I was under the impression that we were talking about regular deep exposure imaging. In any case, the recommendation still stands - even when using short exposures, all of the above applies unless the approach is lucky type imaging, where exposures are in the range of milliseconds rather than seconds. A second or two is enough for seeing to average out at whatever FWHM it is going to have, so there won't be any real benefit that lucky imaging provides (with discarding poor frames) - but yes, if tracking is an issue and that is to be handled with short exposures, then read noise is an important factor - and you are absolutely right there - one should be looking at read noise per unit pixel area, as read noise grows when binning (the kind of binning we are discussing here - software; see the note below).
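For reference, the read noise growth mentioned above: software binning sums k x k pixels, and read noise adds in quadrature over those k^2 reads:

```latex
\sigma_{\mathrm{read,binned}} = \sqrt{k^2 \,\sigma_{\mathrm{read}}^2} = k\,\sigma_{\mathrm{read}}
% e.g. 2x2 software binning of a 1.6e camera acts like 3.2e per super pixel,
% which is why read noise per unit pixel area is the figure to compare
```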