vlaiv

Members
  • Posts

    13,265
  • Joined

  • Last visited

  • Days Won

    12

Everything posted by vlaiv

  1. No (not yet - expecting to get one any time now :D), but I do have an unused B-mask that I can use by covering the angled grating part with something, and a 4" F/10 achromat :D.
  2. Well, all refractors have this issue - the question is how much of it each of them has. If we could measure it without expensive lab equipment like monochromators and such - I guess it would at least be interesting / educational. I think that the proper shape for measurement is simply a half grating looking like this: (just half of the aperture covered by the grating and the other half clear). This animation shows why the spike moves from one side to the other as focus is reached and passed: Further from focus the image resembles more that of the aperture itself, and as it comes to focus - it converges to a point. Similarly - if the grating is just to one side - it will switch sides in the out of focus image - which means it should pass through a point at focus - much like the horizontal grating does in the above animation (it starts on one side and then moves to the other as it defocuses).
  3. I did not realize it until I found the images on the link I provided. Now I'm thinking that it can somehow be used to visualize / measure the color correction of refractor telescopes?
  4. Just to expand on the above - the correct answer is: use 0.5 / 0.5 weights for the background parts of the image and 0.6 / 0.4 weights for the target (or rather, each part of the image should have its own weights assigned to the subs - that will provide a close to optimum solution, and that is what I mentioned above under "per pixel" weighting).
  5. Do you know on what basis the weights are decided? There is a fairly simple way of determining the proper weight for a simple average of values - provided that you have both the value and the noise of every sample - but a) we don't have that in an image, b) there is no single weight that will be suitable for the whole image. Even if we had exact noise and signal levels - one set of weights is optimal for only one set of signal/noise pairs. Imagine the following scenario - one images at the edge of town in a direction away from the LP, but the target is low in altitude; then, as the mount tracks - the target gets higher but also moves towards higher LP. After frame normalization - you can end up with a case where the background noise levels are equal but the noise levels associated with the target signal are different (or vice versa). Which weights are you going to use? 0.5 : 0.5 - which suits the background noise levels, or say 0.6 : 0.4, which suits the noise associated with the target?
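The "fairly simple way" of determining weights mentioned above is inverse-variance weighting: when each sample's noise level is known, the optimal weights for a simple average are proportional to 1/sigma². A minimal Python sketch (the noise numbers are chosen purely to reproduce the 0.5 : 0.5 and 0.6 : 0.4 cases from the post, not measured from any real data):

```python
import numpy as np

def inv_var_weights(sigmas):
    """Optimal weights for averaging samples with known noise levels:
    w_i proportional to 1 / sigma_i^2, normalized to sum to 1."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return w / w.sum()

# Background region: equal noise in both subs -> equal 0.5 : 0.5 weights
print(inv_var_weights([1.0, 1.0]))

# Target region: second sub is noisier -> weights shift to 0.6 : 0.4
print(inv_var_weights([1.0, np.sqrt(1.5)]))
```

The point of the post follows directly: since sigma varies across the frame, no single weight pair is optimal everywhere, hence the case for per-pixel weighting.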
  6. I'm probably not the best person to ask that question as I can't put myself in a purely user perspective. I look at it from an algorithmic perspective - what sort of data transform I'm happy with. I can say what I think is the best thing software should do with the data - but none of the currently available software does it all. - There should be an option for different binning methods at the calibration phase, including split binning. Debayering should also be available in a split variant (I think there is a PI script for this, and perhaps Siril?). - I think that software should use sophisticated resampling methods like higher order Lanczos resampling (a free option that does this is Siril). - I think that software should do mosaic stitching out of the box (APP does this?). - I think that subs should be linearly normalized (PI does this) with LP gradient compensation (no software does this). - I think there should be per pixel weighting instead of per sub weighting for SNR optimization (no software does this). I would also like to see some advanced features like FWHM matching (with use of deconvolution and blurring) and things like that. If you are asking what my current workflow is when stacking images: I use ImageJ to calibrate the data myself (including any split debayering or binning needed). Then I use Siril to register the data. I use ImageJ if I need to stitch mosaics. I use ImageJ plugins that I wrote for sub normalization, and a plugin that I wrote for stacking (both of which handle those special cases I like). I would certainly be happy to have one software package where I would not need to manually take care of these steps. I might end up writing one myself for that purpose.
  7. Depends on how you do it. Here is the easiest way to do it without losing color information. Record, calibrate and stack your image as you normally would - to get a 32bit per channel RGB image. While it is still linear, prior to any processing - bin your data x3 and then proceed to process as you normally would. This will have almost the same effect as using larger pixels (not quite, as you are using a color camera and there is a bayer matrix, so the actual resolution of your data is not what is commonly believed - not the same as an equivalent mono camera - and the improvement from binning won't be the same, but it is the best way to do it with color data in your case).
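The x3 software binning described above is just a block average on the linear data. A minimal illustrative sketch (not code from any particular stacking package; for an RGB image you would apply it to each channel):

```python
import numpy as np

def bin_image(img, factor=3):
    """Average-bin a 2D image by an integer factor (software binning).
    Rows/columns that don't divide evenly are cropped.
    Averaging factor*factor pixels improves per-pixel SNR by ~factor
    for uncorrelated noise."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    img = img[:h * factor, :w * factor]
    return img.reshape(h, factor, w, factor).mean(axis=(1, 3))

data = np.arange(36, dtype=float).reshape(6, 6)
binned = bin_image(data, 3)
print(binned.shape)  # (2, 2)
```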
  8. You can always control over sampling with binning, and field of view with mosaics. That is the cheapest option - you don't have to get any new hardware to do it. You'll probably want to aim at x3 binning in most conditions. That will produce a 1000x1000 image per mosaic tile (or a single image if you don't want to do mosaics) at about 1.8"/px.
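The ~1.8"/px figure after x3 binning follows from the standard image scale formula, scale = 206.265 × pixel size (µm) / focal length (mm). A quick sketch - note the pixel size and focal length below are illustrative assumptions picked to land near the quoted figure, not values stated in the post:

```python
def image_scale(pixel_um, focal_mm, binning=1):
    """Sampling rate in arcseconds per pixel.
    Binning multiplies the effective pixel size."""
    return 206.265 * pixel_um * binning / focal_mm

# Hypothetical setup: 2.9 um pixels on a 1000 mm focal length
print(image_scale(2.9, 1000.0))             # native: ~0.60 "/px (oversampled)
print(image_scale(2.9, 1000.0, binning=3))  # x3 binned: ~1.79 "/px
```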
  9. @ollypenrice Maybe we could divide the whole workflow into a couple of separate steps and then define "standard" and "non standard" ways to do each? For example: 1. Pre processing - this is calibration and stacking. Not much to choose from here - there are only a couple of ways one can stack, and in reality - one algorithm can cover 99% of cases as the best option. 2. Processing - this is again something that can be very "standardized" - as it involves a set of steps that are always performed the same: - cropping if there is a need for it (I would say cropping should be part of the stacking process) - background wipe - color calibration - initial stretch (which will be very gentle, to allow people to see what is there in the image) - to name a few. 3. Post processing. I'd say that this step is optional. Much like daytime photography, one can choose to have the image as it is from the camera after standard processing, or decide to enhance it further for a wanted effect. This is something that can't and should not be standardized - and I think that software like PS or Gimp is the best tool for this (as it is made for that purpose).
  10. I don't think there is one that will work well with a 1" sensor. At that sensor size - you want a dedicated / optimized x0.5. Regular ones that you can find won't work well. For example - https://www.firstlightoptics.com/reducersflatteners/astro-essentials-05x-2-focal-reducer.html will work only for very small sensors. If you think about it - the 533 sensor has a 16mm diagonal. Using a x0.5 reducer will compress a 32mm field into that 16mm diagonal. A 32mm field of an SCT is going to be very curved and coma will be evident. You need a reducer that will correct for these aberrations as well. Maybe you could try this one and see how it works: https://www.firstlightoptics.com/reducersflatteners/stellamira-2-06x-reducer-field-flattener-with-m42-adapter.html - but that is meant for F/5-F/7 refractors. Not sure how it will work on an SCT.
  11. https://www.teleskop-express.de/shop/product_info.php/info/p11425_Starizona-Night-Owl-2--0-4x-Focal-Reducer---Corrector-for-SC-Telescopes.html (or from somewhere where you don't need to wait 1/3 of a year for delivery )
  12. Very nice images. Out of interest, when you say you coupled the LS50THa etalon with a 90mm scope, how did you do it? Is it an F/7 scope where you shortened the tube and fashioned an adapter for the etalon to be screwed in (sort of a PST mod with the Lunt internal etalon)?
  13. It is useful - but it's not the whole story. I've been able to spot the MW at zenith in a Bortle 7/8 location (edge of 7 and 8 according to 2015 data - probably Bortle 8 now). I've also detected M31 with averted vision. This only happens on very transparent nights. Transparency is a big part of the equation, as it impacts the brightness of the target as well as the actual LP levels (how much ground LP scatters in the air). The key is dark adaptation and shielding from direct LP sources like street and house lights. The best way to see it is to pick a transparent night and just sit relaxed in a chair, shielded from LP sources. Spend about half an hour - 45 minutes just relaxing and looking at the stars to get well dark adapted. It also helps if the MW is overhead at that moment - so aim for it to be close to zenith.
  14. It looks like this might be an effect of the color correction of the objective lens - imaged with a full spectrum mono camera. https://www.cloudynights.com/topic/665210-question-about-using-a-bahtinov-mask/ A Bahtinov mask acts like a very low resolution diffraction grating, and the spikes are in fact low resolution spectra. In the above image multiple orders can be seen in each spike. The position of each spike will depend on focus, and depending on the color correction of the telescope - focus can shift with wavelength, creating little curls or at least waves. You are not seeing the colors being affected because you are using a mono camera.
  15. What does the star image look like without the Bahtinov mask?
  16. I tried but I did not like what I saw - complete loss of individuality
  17. It really depends on what you are trying to achieve. It is important to understand why luminance is useful in the first place. Here is a demonstration of that. Let's take a nice image and see what luminance and color information really represent - for example this image: When we decompose the image into components - luminance only and chrominance only: You can see how much information is carried by the luminance and how much by the chrominance components. Lum carries much more information for our visual system. This is very important for our perception of noise. Look at these two images for example - the first one is our baseline image with noise added to the luminance channel, and the second is the baseline image with the same amount of noise added to the chrominance channels (we could even argue that we added more noise in the second case, as we applied noise to two channels of chrominance - not only one): The difference is obvious. It is therefore beneficial to have higher SNR in luminance, and to spend more of the total imaging budget capturing the luminance information than the chrominance component. It is also important that we process the data in the proper way to actually benefit - we really need to treat luminance as luminance information (in an appropriate color space). What about NB imaging then? There are two approaches here. First - we must understand that by doing nothing and just using the pure SHO palette - we are already doing a lot to mimic the L benefit. How so? It turns out that luminance is not composed of the RGB components equally - red, green and blue as defined in the sRGB standard and used in our display devices don't carry the same amount of luminance. If we take pure sRGB primaries - like in the image above - and convert that to a grayscale image (just luminance) - we get this: Green carries the most luminance information, followed by red, and blue at the end (red and blue are close - but dimmer than green).
Ha signal is often the strongest of the usually imaged triplet - and thus has the best SNR if we dedicate equal amounts of time. In the normal SHO palette (not modified like we often see in images for aesthetic effect, where people "kill off" green for some reason) - Ha maps to the component that carries the most luminance information. This is a good thing, as it makes the image look less noisy visually - just by choice of mapping. This of course works only if one does not kill off green or change the palette substantially from original SHO (most of what people call SHO is not SHO at all). Back to luminance for NB imaging and the two approaches: 1. The first approach is to use Ha as the luminance information. This works well because, again - Ha is the strongest of the three and is usually present all over the place. One just must take care if there are areas with strong OIII that don't contain much Ha signal - as in that case OIII will be dimmer than it should be. 2. The second approach is to use a quad band NB filter. This works with mono cameras basically the same way LRGB does for broad band imaging. One should spend most of the time with the quad band filter and then take a small amount of separate NB data just for "coloring" of the image (same as with LRGB really). In either case (any time we use luminance) - we need to make sure we apply it properly to the image - otherwise we are not going to get optimal results.
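The claim that green carries the most luminance can be checked with the standard Rec. 709 / sRGB relative luminance coefficients, Y = 0.2126 R + 0.7152 G + 0.0722 B (for linear-light values). A small sketch:

```python
def luminance(r, g, b):
    """Relative luminance of a linear sRGB color (Rec. 709 coefficients)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure primaries converted to grayscale: green is by far the brightest,
# then red, then blue
print(luminance(1, 0, 0))  # red   -> 0.2126
print(luminance(0, 1, 0))  # green -> 0.7152
print(luminance(0, 0, 1))  # blue  -> 0.0722
```

This is why mapping the strongest signal (Ha) to the green channel in SHO already behaves a bit like a luminance layer.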
  18. For purely imaging - I guess the cheapest option is actually DIY - a spectroheliograph. Look here for an example (with build and operation details): http://www.astrosurf.com/solex/sol-ex-presentation-en.html For "no fuss" visual and photo - take a look at the Lunt 40mm. It has a front mounted etalon (no sweet spots) and a very decent F/ratio - F/10 (which is good for ~3.3um pixel size). A very affordable Ha scope. For best flexibility - a Quark combo: 4" F/10 achromat, x2 and x3 telecentrics and aperture masks to tune the F/ratio of the system for different focal lengths. This setup will give you everything - whole disk, medium and high magnifications.
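The "~3.3um pixel size for F/10" figure matches one common sampling criterion: critical pixel size ≈ F-ratio × wavelength / 2. A quick sketch at the H-alpha wavelength (656.28 nm):

```python
def critical_pixel_um(f_ratio, wavelength_nm=656.28):
    """Pixel size (um) that critically samples the diffraction-limited image:
    pixel = F * lambda / 2 (one common criterion, not the only one)."""
    return f_ratio * wavelength_nm * 1e-3 / 2.0

print(critical_pixel_um(10))  # ~3.28 um at H-alpha, matching the ~3.3 um figure
```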
  19. The original drizzle algorithm was developed for the HST and it works in very specific circumstances. It requires under sampling to be present - and that is not something that usually happens on amateur setups these days - pixels are getting smaller and the seeing is the same as ever. It also requires pretty specific dithering - precise to a fraction of a pixel. Possible in space, but not possible on a tracking mount (tracking mount error is often larger than the dither precision needed - for example x3 drizzle requires pointing precision of 1/3 of a pixel, while we often recommend a rule of thumb that RMS guide error should be about 1/2 of the imaging pixel size). The only place where the drizzling approach works in amateur setups is Bayer drizzle (to an extent - it works best in planetary / lucky type imaging). It is used to recover the resolution of an OSC sensor defined by its pixel size (otherwise OSC sensors sample at half the sampling rate - as if pixels were x2 as large).
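The two rules of thumb above can be put side by side in a quick sketch: xN drizzle wants dither placement good to ~1/N of a pixel, while typical "good" guiding is only ~1/2 of a pixel RMS - so the mount error already exceeds the required precision:

```python
def drizzle_pointing_req(pixel_scale_arcsec, drizzle_factor):
    """Dither placement precision (arcsec) needed for xN drizzle:
    roughly 1/N of the native pixel scale."""
    return pixel_scale_arcsec / drizzle_factor

def guide_rms_rule(pixel_scale_arcsec):
    """Common rule of thumb: total guide RMS ~ half the imaging pixel scale."""
    return pixel_scale_arcsec / 2.0

# Example at 1.8 "/px: x3 drizzle needs ~0.6" placement precision,
# but the guiding rule of thumb already allows ~0.9" RMS error
scale = 1.8
print(drizzle_pointing_req(scale, 3))
print(guide_rms_rule(scale))
```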
  20. I don't actually. Here is your regular version, x3 upscaled and a bit further processed: To my eyes - it contains more detail than the drizzled version. It goes deeper as well (look at the Ha edge). For reference, here is the Hubble version, scaled and rotated similarly to the above image: The features present in the regular version and brought forward with a bit more processing are genuine, as can be seen when comparing to the reference version.
  21. @Shimrod Maybe it seems that bringing economics into this discussion is going off topic - but I don't think so. It serves a purpose: to better understand the original question - will prices of astro gear continue to go up or will they stop? The first thing to understand is that they are in fact not going up. Things still cost the same - it is the value of everyone's work, denominated in euros, pounds and dollars, that is decreasing. You are getting paid less for your work thanks to monetary policy (good or bad - I personally think it is incompetent). It might seem as if prices are going up - but what is really happening is that the value of your work is going down, so that you can get more competitive against the low cost labor of China. That was the point of quantitative easing. This is the exchange rate. Say that something cost 1000 CNY back in May 2020 - you needed to spend 1000 / 8.3 = 120 euro on it. Imagine for a moment that now, in May of 2022, this thing still costs 1000 CNY in China to produce. How much will you have to pay for it? 1000 / 7.1 = 140 euro. That is almost a 20% increase in price - but in reality, for someone in China - the price did not change. The euro is now worth less (as there is much more of it in the economy - but the actual amount of goods and services did not increase at the same rate), and if you receive your salary in euros and it stayed fixed - you are now working for less value - your labor is valued less (to be more competitive on the global market). The question is - how much more work needs to be devalued for things to settle down. We have seen that the amount of money increased by 100% in the last couple of years (almost doubled) - but I don't think that economic growth was more than a few percent. There is still "room" for prices to rise even if we don't hit an inflationary spiral. We hear a lot about the possibility of stagflation hitting, and I think it is very plausible that it will happen. Before it all ends, in my view, the EQ6R mount could easily hit the $3000+ mark and more.
  22. Well, could be the case, but how do you explain the fact that we already had these oil prices in the past and we did not have massive inflation at that time? On the other hand - look at ECB balance sheet over time:
  23. I see that many people blame the gas and oil price increase for inflation (it seems to also be the predominant narrative in the media), but I think that is not the case. Both the EU and the US were printing money like crazy in the last 6-7 years (remember quantitative easing?) - trying to hit target inflation and to make their products competitive against China, and then we had C-19, which caused demand to plummet - all the while money was being printed. As the world economy returns to normal - suddenly there is a massive amount of money floating around, and yes, you bet it's going to cause inflation. Quite possibly that inflation is going to go out of control. The war in Ukraine and the associated sanctions and gas and oil price increases are just a nice excuse for poor decision making over the past half a decade or even more. Inflation was already increasing from the beginning of 2021.
  24. I had issues with a too loose belt (it was fairly tight - but it still needed more tightening), and that created a 13.6s issue - as the teeth meshing was improper. Not sure what could be the issue with a belt that shows only on a full revolution of the stepper shaft. If there were something wrong with the belt then it would show either on each tooth or on a whole "belt revolution" - which is more than one complete worm period - and that is 10+ minutes. Maybe there is something wrong with the idler? Can you inspect it to see if it has a bump on it, or if it is at an angle, or whatever? I don't know what the period of the idler is - probably related to its diameter (the diameter of the part that holds the belt - from the image it looks about the same as the motor gear +/-). The idler can sometimes "slip" - it is not in sync and won't produce an exact period - but it will be close to what can be calculated from its diameter.
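The idler period mentioned above can be estimated from belt kinematics: the belt's linear speed is set by the motor pulley, so the idler's rotation period scales with the ratio of effective diameters. A sketch with hypothetical numbers (none of these values come from the post - measure the actual pulley and idler on the mount):

```python
def idler_period(motor_period_s, motor_pulley_d_mm, idler_d_mm):
    """Rotation period of a belt idler. The belt's linear speed is
    v = pi * d_motor / T_motor, so T_idler = T_motor * (d_idler / d_motor)."""
    return motor_period_s * idler_d_mm / motor_pulley_d_mm

# If the idler's effective diameter matches the motor pulley (as the image
# suggests), its period equals the motor shaft period
print(idler_period(120.0, 12.0, 12.0))  # same as motor period

# A larger idler rotates proportionally slower
print(idler_period(120.0, 12.0, 18.0))
```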
  25. That is really strange. If you changed the gear itself - maybe it is the motor (shaft) that is the problem? Can you perhaps swap those (I'm not sure if they are the same, but they did look the same from what I remember when I was doing the mod to my HEQ5)? Otherwise, I can't really tell what could possibly be ~2 minutes long as far as periodic error goes.