Everything posted by vlaiv

  1. What I was saying is: for any given telescope that you pair with the ASI290, there is an alternative telescope that you can pair with the ASI294 that will provide better SNR if correct processing is applied. This is true simply because the surface of the ASI294 is more than 3 times larger than the surface of the ASI290 (even with the difference in QE accounted for). All of that surface can capture light simultaneously - as if you had the same sensor size but a larger number of sensors (a multi-scope rig).

     No, the Bayer pattern is exactly the same as shown in the diagram - every Bayer pattern looks like that and the only difference is in the order of the pixels. You don't necessarily need to look at it as groups of 2x2 pixels - you can look at it as interspersed sparse matrices of red pixels, blue pixels and green pixels. I do all my OSC processing like that - I don't debayer into an RGB image but rather split the Bayer matrix into mono R subs, mono B subs, and twice the number of G subs, which I stack as mono subs and later color combine (see the sketch below).

     Yes, the problem is EEVA software support. Best is not to debayer by interpolation at all - that just blurs the data, whichever way you choose to interpolate the missing values. Best is to simply accept that we don't have the data to sample at the resolution given by the pixel size - but we are perfectly sampling at twice that pixel spacing. There is no need to try to go to higher resolution by interpolation - if we need higher resolution we can always increase the focal length (even by using barlows, but it is best to choose a suitable scope in the first place).

     There is an alternative algorithm that is sort of "have your cake and eat it too" - Bayer drizzle. However, that is best applied in the planetary / lucky imaging approach when working at extremely high resolutions, where atmospheric movement and mount precision (or rather imprecision) guarantee good dither between subs. Bayer drizzle works much better than regular drizzle, since the pixels are already "shrunken" (pixel size vs sampling rate) and you don't need to be under-sampling to use it.

     Completely agree with that. I believe it is economies of scale here. If one wants to do EEVA on a budget - then I would recommend a used mirrorless camera with an APS-C sized sensor. That will be both cheap and more sensitive (when paired with a suitable scope) than any mono offering out there. Again, the software side of things is lacking - but there is an ASCOM driver for Canon and also for Nikon if I'm not mistaken (maybe there is even one for Sony cameras, but I'm really not sure about that one).

     Completely agree with you on that. I did not want to turn this into something too serious and take the fun out of it - I just wanted to point out that things can be looked at from a different perspective, which would give you more possibilities.
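     A minimal sketch of the "split instead of debayer" idea above, assuming an RGGB pattern and numpy (adjust the offsets for BGGR/GBRG/GRBG):

         # Split a raw Bayer frame into four half-resolution mono planes
         # instead of interpolating missing colors. Assumes RGGB order.
         import numpy as np

         def split_bayer_rggb(raw):
             r  = raw[0::2, 0::2]   # red pixels
             g1 = raw[0::2, 1::2]   # first green
             g2 = raw[1::2, 0::2]   # second green
             b  = raw[1::2, 1::2]   # blue pixels
             return r, g1, g2, b

         # Each plane is half the width/height of the raw frame and can be
         # stacked like an ordinary mono sub; the two green planes simply
         # double the number of green subs.

     Each sub keeps only truly sampled values - nothing is interpolated - and the effective sampling rate is honestly reported as twice the pixel pitch.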
  2. Well, my point was that multi-cell organisms suffer a much broader spectrum of illnesses, so life does not get more resistant in principle as evolution takes place.
  3. I agree with most of what's been said and want to add that CMOS cameras have an advantage in lower read noise. EEVA is about short exposures and stacking them. Read noise is the only thing that makes a difference between a single 10-minute exposure and a stack of 600 one-second exposures - the lower your read noise, the less difference between the two (see the sketch below). For this reason, CMOS cameras (the low read noise ones) tend to perform better for EEVA. CCDs have the advantage of being able to bin without a read noise increase.

     On the matter of driving the scope - you can probably start with an EQ platform. This is going to be a bit more involved way of "observing", as you'll need to locate objects manually by aiming your scope, and if you have a small sensor it will sometimes prove difficult to place the object on the sensor. Goto is much better for this. If you plan on using a small sensor (the most affordable option) - look into a x0.5 reducer, as your scope has quite a bit of focal length - 1200mm.

     Here is what can be captured with such a scope and a small-sensor camera (like the ASI120 - 1/3" sensor) with a x0.5 reducer: M82, M81, M42. This was from a location with substantial light pollution - red zone bordering on white. I don't remember the capture details as it was some 5 years ago (but I think I used 4s exposures and stacked a few minutes at best).
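     A rough back-of-envelope check of the read noise point, using made-up numbers (signal rate in e-/s per pixel, read noise in e- RMS) - only shot noise and read noise are considered:

         import math

         def stack_noise(signal_rate, total_s, sub_s, read_noise):
             # Shot noise depends only on total signal; read noise grows
             # with the square root of the number of subs.
             n_subs = total_s / sub_s
             shot = math.sqrt(signal_rate * total_s)
             read = read_noise * math.sqrt(n_subs)
             return math.sqrt(shot**2 + read**2)

         print(stack_noise(0.5, 600, 600, 1.5))  # one 10-minute sub       ~17.4 e-
         print(stack_noise(0.5, 600,   1, 1.5))  # 600 x 1s, low RN CMOS   ~40.6 e-
         print(stack_noise(0.5, 600,   1, 7.0))  # 600 x 1s, higher RN CCD ~172.3 e-

     With higher read noise the stack of short subs falls much further behind the single long exposure, which is exactly why low read noise CMOS cameras suit EEVA.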
  4. That is actually a very profound statement. When we think of time, we think of it as passing at exact intervals - always "flowing" at the same pace. But that is of course not true. Even without relativity, we can only say that things happen concurrently - like two physical processes happening at the same time - a clock hand moving one second (the smallest measurable interval) and something else happening. Almost like in steps. But do the steps "last" the same amount of "time"? Perhaps the only assumption we can make is that events happen in sequence - they are ordered - we only know what is before and after, and that divides events into past and future with respect to a certain event. But then relativity steps in and we find out that the order of events is not guaranteed. One observer can measure event A to happen before event B and another observer can measure the opposite - event B happening before event A. So even our sequence is no longer a valid assumption - the only thing that is left is the fact that everything did not happen at once.
  5. Sure you can use a mono camera for EEVA - if that suits your "observing" style. Some people like to see the color of the objects and are happy to sacrifice some sensitivity in order to get that. Another important point is that you can get a very large OSC sensor for reasonable money - mono sensors tend to be more expensive. Sensor size = speed. This is because of the way color sensors are used. If you use them properly they can outperform mono sensors (if they are large enough). Let me give you an example.

     First, on resolution. In order to fully exploit resolution, you must understand that an OSC is in effect a sparse sensor. Every other pixel is capturing the information that you want to put together. Take the red channel for example and look at the Bayer matrix: the R kernel is what is captured for the R channel (similarly for the other channels) - there is a pixel, then "empty", then another pixel, then "empty" - you are effectively sampling every other pixel and your sampling resolution is halved. This is not an issue, as you can match your sensor to the wanted focal length to get the resolution you want. If your mono camera is sampling at, for example, 2"/px, then you need to choose a focal length that gives 1"/px for the OSC camera's pixel size (using the same formula). This will ensure that you are effectively sampling again at 2"/px (1" - red, 1" - none, 1" - red, 1" - none; every 2" there is a red pixel, so 2"/px is the red resolution). The second important thing is that your EEVA stacking software knows how to debayer in super-pixel mode to exploit this (or better yet - to split your Bayer matrix into 4 subs, each behaving as mono - one red, one blue and two green subs - at half the resolution of the chip).

     Now, I'll show you how the ASI294 can be more sensitive than the ASI290, although it has a lower peak QE (75% vs 80% for the ASI290) and it is OSC, so each pixel effectively uses only 1/3 of the spectrum. Let's say you are using the ASI290 with an F/5 newtonian like the SW 130PDS. Take your ASI294 and pair it with a 10" F/8 RC scope. With the ASI290 and SW 130PDS your resolution will be 0.92"/px. With the ASI294 and 10" RC, per-pixel resolution will be 0.47"/px, but since we said color is sampled at twice lower resolution - actual color pixels will be sampled at 0.94"/px - almost the same as the ASI290.

     How about light gathering? One pixel of the ASI290 captures the full spectrum of 130mm of aperture at 0.92"/px with 80% QE. Four pixels of the ASI294 capture part of the spectrum of 250mm of aperture over the same surface (same effective pixel size) but at 75% QE. To sum up the photons of the ASI294 we add 1/3 * 1/4 + 1/3 * 1/4 + 1/3 * 1/4 + 1/3 * 1/4 (each quarter of the whole pixel captures only 1/3 of the spectrum). That equals 1/3 of the whole surface. So we have: 130^2 * 0.8 = 13520 vs 250^2 * 0.75 / 3 = 15625 (see the check below). So if you sum R, G, G and B you will get more photons, better SNR, and also RGB information.

     As far as the cameras go, the ASI294, if paired with a suitable telescope, will perform better than the ASI290 paired with a telescope of your choice. But again, you really need a different telescope to pair the ASI294 with, and you need EEVA software that can properly debayer the image and extract luminance from the color pixels by adding them together, and then apply the RGB information on top of that. Resolution will be the same of course, as both will sample at the same sampling rate.
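     A quick check of the photon count comparison above (apertures in mm, QE as fractions; the OSC case keeps roughly 1/3 of the spectrum per pixel but sums the four Bayer pixels covering the same sky area):

         mono_290 = 130**2 * 0.80        # 13520
         osc_294  = 250**2 * 0.75 / 3    # 15625
         print(mono_290, osc_294, osc_294 / mono_290)   # ~1.16x in favor of the OSC rig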
  6. Have you taken some images so far? How do you do your capture and processing? What equipment do you use? With planetary imaging (and I suppose we are talking imaging here, since you talk about editing your images) it is all about aperture, but also about optimizing your capture procedure and processing. To resolve detail you need aperture, but once you have aperture - you need to apply a very special technique called lucky imaging to capture the data, and then you need to stack that data and process it in a particular way (see the sketch below). A simple phone at the eyepiece or a DSLR attached to the telescope is not going to give you very good results. On the other hand, proper imaging technique will give you very good results with even modest apertures. These were taken with a 4" telescope - and 4" is considered a very modest aperture for planetary imaging.

     In order to start getting really good images - you need to look at 8" or above (having 8" would, for example, potentially double the size of the planets in the images while keeping sharpness and level of detail). Probably the most affordable 8" planetary scope would be an 8" F/6 newtonian on an EQ platform. You can get the SW 8" dob for £289 at FLO and you can DIY an equatorial platform for about an additional £100 or so. Throw in a good planetary camera like the ASI224 for another £220 and a barlow for another £50 and you'll be all set. Another good option would be a scope like this: https://www.firstlightoptics.com/stellalyra-telescopes/stellalyra-8-f12-m-lrs-classical-cassegrain-telescope-ota.html and you would need at least an EQ5 class mount to carry that scope for planetary imaging.
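     For illustration only - the heart of lucky imaging is grading thousands of short video frames by sharpness and stacking only the best ones. A minimal sketch using a crude Laplacian-variance metric (real tools like AutoStakkert! or Registax also align the frames before stacking):

         import numpy as np

         def sharpness(frame):
             # Variance of a simple Laplacian - higher means sharper.
             f = frame.astype(np.float32)
             lap = (-4 * f[1:-1, 1:-1]
                    + f[:-2, 1:-1] + f[2:, 1:-1]
                    + f[1:-1, :-2] + f[1:-1, 2:])
             return lap.var()

         def stack_best(frames, keep_fraction=0.1):
             # Keep only the sharpest fraction of frames and average them.
             scores = np.array([sharpness(f) for f in frames])
             best = np.argsort(scores)[::-1][:max(1, int(len(frames) * keep_fraction))]
             return np.mean([frames[i] for i in best], axis=0)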
  7. Not necessarily. I'll give you a couple of examples here:
     1. Sloths - they are certainly not as fast as their ancestors. Here the next generation is simply not faster in every case.
     2. Disease - can you name any disease that attacks single-cell organisms like amoebae?
     3. Sight is known to be lost in several species, for example.
     The theory of evolution simply states that there is a fitness function and that the next generations will "score higher" on that fitness function. The fitness function must include better survival odds - or rather, the fitness function implies a higher probability of surviving, under the given environment, long enough to pass genes down the line. Evolution really does not care what happens to any individual once they have procreated (this is why we get cancer with much higher probability as we get older, past our prime reproductive age).
  8. "I'd far rather be happy than right any day." "And are you?" "No. That's where it all falls down, of course." "Pity," said Arthur. "It sounded like rather a good lifestyle otherwise."
  9. I don't think that there is a clash. First - life does not evolve into something better and improved - it evolves into something "more suited" to the environment. The fact that locally there is a decrease of entropy does not mean that there isn't an overall increase in entropy. It might seem that we are thriving now, but look at the mess we have made of the planet - overall we have done more "harm" than "good".
  10. There is literally zero evidence to support this claim
  11. Probably a better way to do it would be like this: do a basic linear stretch on all three channels and convert to 16-bit. This ensures that you keep good data resolution "at the bottom of the range" and still have 16-bit data for StarNet. Take one channel - usually Ha, as stars tend to be tightest there - and after star removal in StarNet, do a subtraction so you are left with a star-only image. Do your processing on the starless data and combination, and then layer the star-only image, with a very slight stretch, over the final starless composition (see the sketch below). This ensures that you get your stars back in, that they are white (no annoying purple fringes and weird colors) and that they are tight. With the level of stretch you can decide how much "presence" you want from them - if you don't really like emphasis on stars, just do a slight stretch on them before blending.
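      A sketch of the star-only extraction and blend described above, assuming float images in the 0..1 range and a starless version already produced by StarNet (the gentle "stretch" here is a plain gamma, just to illustrate the idea):

          import numpy as np

          def extract_stars(original, starless):
              # Star-only layer: whatever the star removal took out.
              return np.clip(original - starless, 0.0, 1.0)

          def blend_stars(starless_processed, stars, gamma=0.7):
              # Screen-style blend of a slightly stretched star layer over
              # the processed starless image; lower gamma = more star presence.
              stretched = stars ** gamma
              return 1.0 - (1.0 - starless_processed) * (1.0 - stretched)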
  12. Stars look like they are very slightly out of focus? Is this due to optics, processing, or indeed a focusing issue?
  13. I hear the universe whisper: "There is no fate but what we make for ourselves"
  14. This is basically the third method I've mentioned but with some fancy electronics to make it work
  15. A direct replacement would be this scope: https://www.teleskop-express.de/shop/product_info.php/info/p8866_TS-Optics-Doublet-SD-Apo-72-mm-f-6---FPL53---Lanthan-Objective.html Both Altair Astro and TS source their scopes from the same manufacturer and there are a number of scopes that are basically the same between the two brands. One of these is the scope in question and its TS equivalent linked above. There are cheaper alternatives that might have slightly worse color correction, but that really should not show in purely visual use, maybe only in imaging: https://www.teleskop-express.de/shop/product_info.php/info/p1151_TS-Optics-70-mm-F6-ED-Travel-Refractor-with-modern-2--RAP-Focuser.html and https://www.firstlightoptics.com/pro-series/sky-watcher-evostar-72ed-ds-pro-ota.html Other differences would be slightly worse "build" quality:
      - poorer focuser unit (2" crayford vs 2.5" rack & pinion)
      - fixed dew shield in the case of the SW unit (although it can be removed, I'm not sure I would remove it for transport - I prefer retractable dew shields).
  16. Not the fancy big one - like the lunar laser ranging thing - but rather those pocket units that you can use to measure your house or distances in your back yard? I recently contemplated distances for an artificial star to minimize spherical aberration in telescope testing, and then I realized - it is not easy to measure, say, 60 or more meters precisely on rough terrain. For that reason I searched the net and found out that these hand-held laser units are quite precise (about 2mm or so) and rather affordable - there are models for less than £50 that have ranges of 80+ meters. Then it hit me - I have no clue how they operate. I do have some understanding of how they could possibly work:
      - pulse measurement (lunar laser ranging experiment)
      - interference of light
      - amplitude modulation interference

      The first one is of course very difficult for small distances with a hand-held unit. The speed of light is 300,000 km/s, or 300 km in a millisecond, or 300 meters in a microsecond, or 30cm in a nanosecond - see, we are approaching atomic clock speeds here. Do you know a clock that can measure nanoseconds with precision - not to mention the need to measure less than 1% of that to get 0.3cm accuracy?

      The second is good for very short distances, because the wavelength of light is in nanometers - less than one micron. It is perfect for measuring the precise curve of a telescope mirror and such - but not good over 80m+ distances.

      The third one seems like a reasonable option - except, how strong does the laser need to be and how good does the sensor need to be to measure enough photons per fraction of a second to get down to two millimeters of precision? The good thing about amplitude modulation interference is that one can vary the frequency of the modulating wave, and that gives us a rather good way to "switch" between distance units. We can use one frequency to measure meters with some precision, then another frequency to measure 1/10th of a meter, then another for 1/100th of a meter, and then a fourth frequency for 1/1000th of a meter. However, we need something like a 1mm wavelength for 2mm precision. The speed of light is again 300,000 km/s, and the time it takes to cover a millimeter is a very short amount of time indeed. What kind of sensor can do an exposure that fast, and how many photons can we hope to capture to get decent SNR and figure out where on the amplitude curve we currently are? Can anyone shine some (laser) light on this?
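      From what I understand, pocket units typically use exactly the third approach: the laser intensity is modulated at a radio frequency, and the phase lag of the returned modulation is measured electronically as a fraction of a cycle, so millimetre precision does not require a millimetre-scale modulation wavelength. A rough sketch with purely illustrative numbers (not real device specs):

          import math

          C = 299_792_458.0  # speed of light, m/s

          def distance_from_phase(phase_rad, f_mod_hz):
              # Distance within one ambiguity interval of c / (2 * f_mod).
              return C * phase_rad / (4 * math.pi * f_mod_hz)

          f = 10e6                                            # 10 MHz modulation
          print(C / (2 * f))                                  # unambiguous range ~15 m
          print(distance_from_phase(2 * math.pi / 10000, f))  # 1/10000 of a cycle ~1.5 mm

      Coarser modulation frequencies (say 100 kHz) then resolve which ~15 m interval the target sits in - the "switching between distance units" idea above.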
  17. According to this: https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe we have a few competing theories: Big Freeze or heat death, Big Rip, Big Crunch, Big Bounce, Big Slurp and cosmic uncertainty. I did not know about the Big Slurp, which is pretty interesting as it says everything could disappear in an instant (due to the Higgs field and the true/false vacuum thing - not going to pretend I understand it), and cosmic uncertainty, which basically means that we have no clue (a well-educated guess of not having a clue). I vote for the Big Freeze as being the most realistic given current data.
  18. Back to the original topic - what will the actual fate of the universe be? As far as I remember, the "Big Rip" is not certain to happen (even with the current accelerated expansion of the universe) - or is it?
  19. Could be. If we examine the data on the two sensors, the IMX183 does win and it is about twice as sensitive as the KAI-11002. We have definite data for the KAI-11002 sensor: it is about 32% QE at 656nm. For the IMX183 we have only a relative QE graph and an estimated peak QE. The relative QE curve at 656nm sits at about 78%, and the peak QE is often quoted as 84%, so if we multiply the two we get 0.78 * 0.84 = ~0.65 = 65% (see the quick check below). This would put the IMX183 at twice the sensitivity at Ha. On the other hand, Christian Buil found on more than one occasion that the published QE graph differs from the measured QE graph and the peak QE is lower. According to his measurements, the Ha QE of the ASI183 is closer to 51% and the peak QE is 80%. Source: http://www.astrosurf.com/buil/asi183mm/

      In the end, linear measures like aperture and pixel size need to be squared to translate into exposure time (surface is what matters), but QE enters linearly. If we take the difference in QE into account and make the IMX183 twice as sensitive - we still have a factor of ~x5.92 in exposure time in favor of the Atik 11000 + FSQ. This is without taking into account any LP noise, which will be roughly x3.2 larger for a 2 magnitude difference in SQM reading, for example (even with NB filters - since both setups use NB filters).
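      A quick check of the QE figures quoted above (values read off the published graphs, so approximate):

          qe_imx_ha = 0.78 * 0.84       # relative QE at 656nm x quoted peak QE = ~0.655
          qe_kai_ha = 0.32              # KAI-11002 at 656nm
          print(qe_imx_ha, qe_imx_ha / qe_kai_ha)   # ~0.65, roughly 2x the KAI-11002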
  20. In that tone, here is my direct comparison between QHY and ASI. I used to have a QHY5IILc and I now exclusively have ASI cameras. This is of course a limited "sample size" to draw any serious conclusions from, but at the time I had the QHY camera, it was very much lacking in the drivers / software support department. I did not have such issues with the ASI range. On the other hand, I did have some issues with the ASI range - which turned out not to be issues with the cameras themselves but rather with the USB system. Once I added a powered USB hub and changed USB cables - everything worked without any issues. The fact that I now have 3 ASI cameras says something about their quality, I suppose.
  21. Pay attention to the fact that you imaged it with a camera that has 9µm pixels and 106mm of aperture. My guess is that you used the FSQ-106ED at 530mm of focal length, so your sampling rate was 3.5"/px. If we calculate some sort of "speed index" - in the form of aperture * resolution = 106 * 3.5 = 371 - we can compare it to the above setup in some ways. Here, on the other hand, we have 1.54"/px with 70mm of aperture, so the "speed index" is 107.8. The ratio of the two is ~3.44, or if we use squared values (and we should, since we want aperture area and pixel area) - it is ~11.84 (see the check below). The above image is therefore equivalent to about 35 minutes of imaging with the 106mm scope and the 9µm camera (it would be a bit more if we take into account the QE of both cameras at 656.8nm, as the ASI183 is likely to win there). On the other hand, sky brightness still has some impact even though this is NB, so I guess your dark skies would win there. Would you expect the noise level in this image to be that of less than one hour of the Atik 11000 and FSQ-106ED in Ha on this target?
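      The same "speed index" arithmetic as a quick check:

          fsq   = 106 * 3.5      # 106mm aperture sampled at 3.5"/px -> 371
          small =  70 * 1.54     # 70mm aperture sampled at 1.54"/px -> ~107.8
          print(fsq / small, (fsq / small) ** 2)   # ~3.44 and ~11.84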
  22. Exactly, it's not overly complicated to turn a reference design into a PCB layout - there is software to do that easily, and board printing companies expect you to send them the layout in a particular format.
  23. Indeed, two possible reasons: 1. the wrong Bayer matrix type selected when debayering, or 2. just missing color balance. In fact, most of my planetary captures with dedicated planetary cameras turn out rather green, like this: (this is an actual Jupiter stack without any processing). Here are the details on how to obtain color: But probably the easiest way to do it would be a simple auto color balance function - that works pretty well. Here is what Registax 6 gives for your image: Unfortunately, it looks like it was case 1 - the wrong Bayer matrix. When I try to correct the above "by hand", this happens: This color usually appears when the Bayer matrix is wrong (see the note below).
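      A note on why a wrong Bayer pattern often just swaps red and blue: RGGB and BGGR have their green pixels in the same positions, so debayering one as the other puts blue data into the red channel and vice versa. If re-debayering with the correct pattern is not an option, swapping the channels in post is a reasonable fix - a minimal numpy sketch:

          import numpy as np

          def swap_red_blue(rgb):
              # rgb: H x W x 3 image debayered with an R/B-swapped pattern.
              return rgb[..., ::-1]   # reverse channel order: R,G,B -> B,G,R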
  24. I highly doubt that astronomy camera manufacturers actually have their own R&D teams that put all of that together. I believe they just take the reference implementation and pack it into a suitable housing. I bet that if one took two different cameras apart, one would find the same electronics next to the sensor.
  25. I don't believe there will be much difference in the images produced with the same sensor, regardless of the camera vendor (if other variables are kept reasonably the same - scope, mount, sky quality, etc.). I think it is more about things like: how good the camera casing is, what the weight is, whether there are any reflections from the camera's sensor cover window, what the power consumption is like, whether the USB connection is stable, and how good the drivers are - user experience in general. The sensor will be the same after all, so the data produced will be very similar.