Everything posted by vlaiv

1. I based my recommendation on the fact that the OP wanted to start off with a mirrored system. I wasn't really trying to talk anyone into anything. Out of all the mentioned scopes that have mirrors, I still think that the 5" or 6" F/5 Newtonian - or that Maksutov Newtonian - are the best, although the MN is more expensive. I also think that we should present people with all the options. I know that most people would be more comfortable starting off with a refractor, but that does not mean there are no people who would manage a Newtonian from the start. I took some of my first images with an F/6 8" Newtonian. Yes, they were poor images - but I don't think I would have produced better images with a 4" APO at that stage.
2. I would actually recommend a 5-6" F/5 Newtonian as a bigger starter scope with mirrors. There are two popular models: the Skywatcher 130PDS and the 6" F/5 offered by different vendors. Someone recommended TS Photon Newtonians instead - these are made by GSO rather than Synta and are quite fine.

A more important "metric" than telescope size is something called sampling resolution. It depends on both the camera used and the selected focal length of the telescope. I'm assuming you'll start with a DSLR-type camera with an APS-C sized chip. Most such cameras nowadays have pixel sizes that are a good match for ~600-700mm of FL. The 130PDS is an F/5 instrument and has 650mm of focal length. A 6" F/5 telescope will have 750mm of focal length - so both are good matches for DSLRs and a general working resolution of about 1.5"-2"/px.

You can get a 6" F/4 telescope, which has 600mm of FL, or even an 8" F/4 instrument with 800mm of FL - but those are fast telescopes, and collimation and keeping everything squared is going to be a problem for a beginner. Newtonians are going to require a coma corrector - so factor that in - and the coma corrector needs to be placed at a certain distance from the camera, so you'll need a proper adapter to connect everything together.

The exception to the above is a nice imaging scope (more expensive) that has a built-in corrector: https://www.teleskop-express.de/shop/product_info.php/info/p12005_Explore-Scientific-MN-152-f-4-8-Maksutov-Newtonian-Telescope-with-Field-Correction.html That is a Maksutov Newtonian telescope. It has 730mm of focal length and you don't need to purchase a coma corrector for it. It has a big meniscus lens that makes a nice flat field suitable for astrophotography.
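For reference, sampling resolution follows directly from pixel size and focal length. Here is a minimal sketch (the 4.3 µm pixel size is just a representative DSLR value, not a figure from the post):

```python
# Image scale ("/px) = 206.265 * pixel_size (um) / focal_length (mm).
# 206.265 is the radian-to-arcsecond factor scaled for the um/mm units.

def sampling_resolution(pixel_size_um: float, focal_length_mm: float) -> float:
    """Return sampling resolution in arcseconds per pixel."""
    return 206.265 * pixel_size_um / focal_length_mm

# Focal lengths mentioned above, against a representative 4.3 um DSLR pixel:
for fl_mm in (600, 650, 730, 750, 800):
    print(f'{fl_mm} mm -> {sampling_resolution(4.3, fl_mm):.2f}"/px')
```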
3. If you connect your mount to a computer, you can automate the whole process: take an exposure, plate solve it, correct the mount position. This will help you get on target in minutes every time - it also helps with framing, as it will display the FOV angle and so on. What software do you use for image capture? Most capture software has an option to plate solve, and you just need an ASCOM driver for your mount to be able to "sync" the mount to the correct position and direct it to the wanted coordinates from software (like Stellarium or CdC). A sketch of the loop is below.
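Purely as illustration, this is what the capture / solve / sync loop amounts to. Everything here is a placeholder - capture_exposure, plate_solve, angular_separation and the mount methods are hypothetical names, not a real API; capture suites wire these steps together for you via the mount's ASCOM driver:

```python
# Hypothetical sketch of an automated GoTo via plate solving.
# All functions and methods below are placeholders, not a real library API.

def goto_with_platesolve(mount, camera, target_ra, target_dec,
                         tolerance_arcsec=30.0):
    mount.slew_to(target_ra, target_dec)            # initial, imprecise GoTo
    while True:
        image = camera.capture_exposure(seconds=5)
        solved_ra, solved_dec = plate_solve(image)  # where the scope really points
        error = angular_separation(solved_ra, solved_dec, target_ra, target_dec)
        if error <= tolerance_arcsec:
            return                                  # close enough - on target
        mount.sync(solved_ra, solved_dec)           # correct the mount's idea of position
        mount.slew_to(target_ra, target_dec)        # re-slew from corrected position
```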
4. There is a link on this page that you should follow: and on the new page you'll see the details:
5. Astrometry.net will provide you with that information as soon as the plate solve is finished. If you don't have an account, you should wait for the job to finish and for the page to refresh with the relevant info. If you have an account, I think you can log in later and examine all your plate solves.
6. We agree on that. The problem I was trying to point out is that people often don't think of aperture at a given working resolution and think only in terms of F-ratio - and conclude that a "faster F-ratio" scope is a better option for imaging than a "slower F-ratio" scope. This is why I pointed out your sentence: "But they both can get away with shorter exposure times because of the gain in F-ratio".
7. Aha! The F-ratio myth! What if there is a third person with the same scope and same camera who uses neither a focal reducer nor an "aperture amplifier" and simply bins their pixels x2? All three of them are now tied in terms of time to reach a set SNR, and all three gain over the stock setup - but only two of them due to a gain in F-ratio. How can we explain the third person's improvement at the original F-ratio, if F-ratio is what determines speed?
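To make the argument concrete, here is a minimal sketch with illustrative numbers (read noise ignored): signal per output pixel scales with aperture area times the sky area each output pixel covers, so the 0.5x reducer and the 2x2 bin yield identical gains while only one of them changes the F-ratio.

```python
# Relative imaging "speed" = photons per output pixel per unit time.
# It depends on aperture area and sky area per output pixel - the F-ratio
# never enters on its own. Numbers below are illustrative only.

def scale(pixel_um, fl_mm):                       # arcsec per output pixel
    return 206.265 * pixel_um / fl_mm

def relative_speed(aperture_mm, pixel_um, fl_mm):
    return aperture_mm ** 2 * scale(pixel_um, fl_mm) ** 2

native  = relative_speed(150, 3.75, 750)          # stock setup, F/5
reduced = relative_speed(150, 3.75, 375)          # 0.5x reducer, now F/2.5
binned  = relative_speed(150, 7.50, 750)          # 2x2 bin, still F/5

print(reduced / native)   # 4.0 - the gain attributed to "faster F-ratio"
print(binned / native)    # 4.0 - identical gain with the F-ratio unchanged
```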
8. Back on the original topic - yes, this works, and it works well if you are careful in how you process your data. In fact, I think most people using the LRGB approach do the same if they stretch the L and then transfer the color to it later. Above is an example of mixing data from two very different setups: an 8" F/8 RC with ASI1600 for mono data and an 80mm F/6 APO with ASI178mcc for color data. The good thing with this approach is that you can optimize time use if you have two scopes in a side-by-side setup. The above image was taken on two successive nights - about 4h each of lum and color (but under SQM 18.5 skies, so lacking depth).
9. The PPI value stored in an image really does not change its resolution when viewed on screen. It is mostly there for printing purposes, so the resolution is still 1.1"/px. At 300ppi the image would print 4 inches wide (as it is ~1200px wide), and at 72ppi it would print 16.7 inches wide. In either case, imaging at 1.1"/px and getting results this sharp (even with a bit of unsharp masking) requires very good seeing and a very good mount.
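The print-size arithmetic, spelled out (a trivial sketch; 1200 px is the approximate width quoted above):

```python
# PPI only maps pixel count to physical print size; on screen one image
# pixel stays one image pixel, so the 1.1"/px sampling never changes.
width_px = 1200
for ppi in (300, 72):
    print(f"{ppi} ppi -> {width_px / ppi:.1f} inches wide")
# 300 ppi -> 4.0 inches, 72 ppi -> 16.7 inches
```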
10. The regular AA 102mm ED with FPL-51 glass does have some color fringing due to CA, and to be honest, I've seen a couple of cases with the AA 102mm ED-R (which has FPL-53 glass) also showing a bit of blue around bright white stars. I do know the 80mm F/7 is going to show less. Do you have any images with an OSC sensor and that particular scope?
11. Seeing on that particular night must have been exceptional. This is a very high resolution image indeed! Did you use sharpening, or is this the actual resolution achieved? Star sizes at 1.1"/px are just incredible for ~4" of aperture.
12. There are several things I would like to address here.

First - the amount of data in a channel. If you do proper color calibration, then although you spend, say, 1/4 of the time on blue, you will still have the "proper" amount of blue. The only difference will be in noise. For any given SNR you can boost your signal to the needed level - but since you have a fixed SNR for a set number of stacked subs, you'll also amplify the noise to match (SNR does not change when multiplying by a constant).

However - the above has nothing to do with saturation. Don't mix the two.

Color in astronomical images has been a topic of much controversy, and I'll offer my view on it. You want to preserve the RGB ratio in order to preserve the physical properties of the recorded light. This is the best we can do as far as color goes. It also means that we need to do color calibration properly - and that means we will end up with an RGB ratio for each pixel that represents the color of the light in that pixel. If you only shot 1/4 of the time for the blue channel, it will be multiplied by a certain number to get an accurate RGB ratio (color calibration) - and that will also multiply the noise. You will end up with the proper RGB ratio after color calibration, but the channel that lacks exposure will be noisy.

In fact, I would say that if you are going to sacrifice a channel for imaging time, let it be the red channel. The blue channel is attenuated the most by the atmosphere, and sensors also tend to be the least sensitive in that part of the spectrum. Even if you dedicate the same time to each channel, blue is going to get the worst SNR because of that. There is also photon energy: the blue part of the spectrum has the shortest wavelengths. For the same energy in R, G and B - given that the shortest-wavelength photons have the highest energy - there will be the fewest photons in the blue part of the spectrum. That means the least signal, as signal with photo detectors represents photon counts. If you can, expose the blue channel the longest. By the way, PCC in PI will not produce accurate color.

In the end, you need to think about how you ended up with less data, as there are multiple ways to get there. Say you shot only 1/4 the number of subs in blue compared to the red and green channels, but you used the average stacking method and the same sub duration. Then you "don't need to do anything": one sub will contain the same amount of signal as a stack of 20 or more subs combined by average - it is only the noise that will differ (the average of 20, 20, 20, 20 is the same as a single 20). If you used 1/4 the exposure time for the blue subs, then you need to multiply the blue subs by 4 to "equalize" them (if you average 5, 5, 5, 5 you get 5, while the others will be the average of 20, 20, 20, 20 = 20 - so you need to multiply 5 by 4 to get 20). If you used the same sub duration but additive stacking, you again need to multiply the blue channel by 4 to "equalize" things (the others are sums of four subs, 20+20+20+20 = 80, while blue is a single 20 - so again a factor of 4).
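A toy numpy sketch of that bookkeeping (all numbers invented): average stacking preserves the signal level while reducing noise, and equalizing a shorter-exposed channel by multiplication scales its noise along with its signal.

```python
import numpy as np

rng = np.random.default_rng(0)

signal_rate = 20.0   # arbitrary flux units accumulated per full-length sub

# Red: four full-length subs. Blue: four subs at 1/4 the exposure each.
red_subs  = signal_rate + rng.normal(0, 2, size=(4, 100_000))
blue_subs = signal_rate / 4 + rng.normal(0, 2, size=(4, 100_000))

red_stack  = red_subs.mean(axis=0)    # average keeps the level at ~20
blue_stack = blue_subs.mean(axis=0)   # level ~5: needs x4 to equalize
blue_eq    = blue_stack * 4           # level ~20, but noise scaled x4 too

print(red_stack.mean(), red_stack.std())   # ~20.0, std ~1.0 (2 / sqrt(4))
print(blue_eq.mean(),   blue_eq.std())     # ~20.0, std ~4.0 - same signal, 4x the noise
```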
13. It depends on the cameras used and the criteria for winning. If they use the same camera and the same processing method (i.e. the same pixel binning, if any), and the criterion for winning is the SNR achieved in a set amount of time - or the time spent reaching a set SNR on the same target - provided that both capture the intended target completely and don't need mosaics due to the difference in FOV, then: it is a tie!
14. I personally saw that the background had a red cast, so I downloaded the image and opened it in Gimp. There you can inspect the histogram of each channel and see where the histogram peak is. There is also a value inspection tool (which you can set to sample the average of a few pixels to avoid noise). Another way to do it is to open the image in ImageJ and run statistics - there you can get mean and median pixel values (the median is a better measure of background value, as it is less sensitive to the high values that come from stars); see the sketch below.
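To illustrate why the median is the better background statistic, a minimal numpy sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy frame: ~30 ADU sky background with noise, plus 200 bright "star" pixels.
frame = rng.normal(30, 3, size=(200, 200))
star_idx = rng.choice(frame.size, size=200, replace=False)
frame.ravel()[star_idx] = 5000

print(np.mean(frame))    # pulled far above 30 by the star pixels
print(np.median(frame))  # stays ~30 - a robust estimate of the background
```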
15. It is not my definition - I was just trying to remember the numbers that @ollypenrice said he uses most often for background level. I consider his images to be good, and by extension you can take that as a recommendation for a good value - not a definition. I don't have a definition of "good" - I prefer darker backgrounds, but not too dark, and although I process most of an image "by the numbers", the background is something I do by feel in each image.
16. I was not aware that I had been "misleadingly wrong on any number of occasions". Was I wrong in this case? And if not, why did you change the image to correct the background? Could you please clarify when I said that the majority of images presented on this forum are bad?
17. https://www.orionoptics.co.uk/ODK/odk12.html for example - cheaper, and with a larger flat field.
18. Just had a thought: this expansion means that we should be able to detect a density change in observed galaxies as we look further away. Space is expanding, and galaxies were closer together earlier than they are now. As we look further away, we are also looking back in time. If we assume homogeneity of the universe - and there is expansion - then when we look at distant galaxies, they should be much more bunched up than we would assume from the density of nearby galaxies. Is this so?
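For reference, the standard expectation here (my addition, assuming the number of galaxies is conserved):

```latex
% Proper number density of galaxies at redshift z, if their number is
% conserved and proper lengths scale with the scale factor a = 1/(1+z):
n(z) = n_0 \, (1 + z)^3
```

So in proper volume distant (earlier) galaxies are indeed more bunched up, while in comoving volume the density stays constant by construction.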
19. Nice one. Maybe try to wipe the background? It has a distinct red cast to it. Median pixel values are 35 for red, 30 for green and 28 for blue. A good background should be around 18:18:18 or so?
20. Part of the color problem has to do with the way the data is stretched. That is particularly present with CMOS sensors: since they enable shorter exposures, the signal is particularly faint in such recordings and needs to be stretched more strongly. "Regular" stretching simply desaturates colors, because we apply a non-linear transform and the RGB ratio is not preserved. Instead, one should preserve the RGB ratio and stretch only the luminance / intensity of the light to preserve color - see the sketch below.
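A minimal sketch of what such a ratio-preserving stretch looks like (the gamma curve and the crude luminance estimate are placeholders of mine; any monotonic stretch and any luminance formula could be substituted):

```python
import numpy as np

def stretch(lum, gamma=0.35):
    """Placeholder non-linear stretch - any monotonic curve works."""
    return np.clip(lum, 0, 1) ** gamma

def color_preserving_stretch(rgb):
    """rgb: linear float array of shape (H, W, 3) in the 0..1 range."""
    lum = rgb.mean(axis=2, keepdims=True)            # crude luminance estimate
    factor = stretch(lum) / np.maximum(lum, 1e-8)    # same factor for R, G and B
    return np.clip(rgb * factor, 0, 1)               # per-pixel R:G:B ratio kept

# Stretching each channel independently (the "regular" way) changes the
# ratios and washes out color; one shared factor per pixel does not.
```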
21. First I need to point out that I ran some tests of the WinRoddier software with synthetic data, and it turned out that the test is not reliable (or my test methodology is flawed). You can read about the results of these tests here: If you still want to try this test, I can give you a brief overview of what is involved and also point you to some PDFs that contain detailed instructions on how to perform it.

1. You start with a small utility program that helps you calculate the defocused star pattern size. You enter the focal length of the telescope, the pixel size and the level of defocus (which should be about 20-30 waves), and it calculates how many pixels across your defocused star should be (a sketch of this calculation follows after the list).

2. The next step is to find suitable software that will measure the size of features on screen - a screen ruler or something like that. You'll need that to hit the wanted level of defocus.

3. Start a capture application like SharpCap and record about 1-2 minutes of SER video (similar to planetary imaging) or a sequence of FITS files. You'll need to make two recordings - one for the in-focus image and one for the out-of-focus image. The defocused star image should be the calculated size in pixels (use the ruler app to measure it and adjust the defocus).

4. Use planetary stacking software like AutoStakkert to stack those recordings and produce a single stack of each - in focus and out of focus. In this step it is important that you don't do anything fancy with the data - don't auto-stretch it, don't equalize the histogram, don't sharpen - don't do anything except regular average stacking with alignment.

5. Start the WinRoddier software, import the in/out focus images, enter the telescope parameters and hit "calculate".

That is what is generally involved - rather simple. There are things to be careful about, like choosing a night of relatively good seeing (it can be full moon - that won't be a problem); the scope needs to be properly cooled down and collimated; the star should be placed in the center of the FOV - on the optical axis - and it should also be high in the sky to minimize atmospheric influence, and so on...

If you wish, I can share those PDFs with you - but you can also find them in the files section of the WinRoddier group. It was previously hosted on Yahoo groups, but now seems to have moved to groups.io or something like that (let me see if I can find that for you). https://groups.io/g/roddier (you'll need to apply for access and yes, all the material is there in the files section - just browse through the 3 pages and find the one that has: Roddier_Ver.3.0_Quick-Start_Guide.pdf and New_WinRoddier-3_User_Manual.pdf)
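As for step 1, the size calculation can be sketched from the standard defocus relations (these are my assumptions about what the utility computes, not something stated in the post): a longitudinal defocus dz corresponds to dz / (8 λ F²) waves of defocus and a geometric blur circle of diameter dz / F, which combine to a blur diameter of 8 λ W F.

```python
# Defocused star diameter, assuming the standard relations:
#   W (waves)     = dz / (8 * wavelength * F^2),  F = focal ratio
#   blur diameter = dz / F  (geometric)
# => blur diameter = 8 * wavelength * W * F

def defocused_star_size_px(focal_length_mm, aperture_mm, pixel_um,
                           waves=25, wavelength_um=0.55):
    f_ratio = focal_length_mm / aperture_mm
    blur_um = 8 * wavelength_um * waves * f_ratio
    return blur_um / pixel_um

# e.g. an 8" F/8 scope with 3.75 um pixels at 25 waves of defocus:
print(defocused_star_size_px(1624, 203, 3.75))   # ~235 px across
```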
22. Get yourself a copy of the Stellarium application. It is a planetarium application that will show you what is in the sky for your location and the selected time. Look at the symbols that I marked: green squares denote emission-type nebulae / HII regions, and circles with little spikes denote planetary nebulae. These are the types of targets that you are interested in. Planetary nebulae are usually small in size, so go for the "squares" rather than the circles with spikes. You'll see additional info (size, brightness and such) in the top left corner once you select a target.
23. The problem with people who don't understand the theory yet believe they do is that they will not refrain from stating their version. The best course of action in these cases is to point out the flaws in their reasoning, present them with evidence to the contrary and offer the accurate version of the theory. Sometimes that does not help either, but at least others can follow the presented correct interpretation and evidence and draw their own conclusions.
24. Well, I think that in the end it comes down to what we mean when we say color. I know - it's strange; color is color, right? Well, it turns out that there are different meanings to the word. I'll use another word to explain this - temperature.

We use the term temperature to express a physical property of some object. We can measure it with a device and we assign a numerical value to it. We can say: the water in this bowl is 8°C. Then there is the temperature that we feel. We can be warm or cold, or just right. In fact we have a whole spectrum of words that we use to describe how we "feel", or what the temperature of an object "feels" like.

However, there is a difference between the two. Objective temperature is something many people can measure, and we will all agree on the measured value given a certain standard of measurement to follow (the units in which we measure, the way we measure and so on). Subjective temperature is a personal feel, and two people judging the same temperature can give contradictory answers - and neither of them need be wrong.

Imagine the above bowl of water at 8°C on a hot summer day in the Caribbean. You wear nothing but swimming shorts and it is rather hot outside. You put your hand in that bowl of water and you say with confidence: "This water is pleasantly chilly." Take the same bowl of water at the same temperature in wintertime in your back yard. You've been observing at your telescope lightly clothed (however unlikely in real life) for a couple of hours and you are starting to tremble from the cold. Now you place the same hand in the same water and conclude that it is "comfortably warm". How can that be? Well - that is subjective feel.

A similar thing happens with colors - our perception of a color changes with conditions, and it is relatively hard to simulate the "feel" of a color under different conditions. Color appearance models try to do that, and when we talk about white balance - that is actually part of our perception and not color processing of sensor data. That is why there is no white balance in astrophotography. We are not attempting to recreate the "feeling" - but rather to record and synthesize an equivalent spectrum. That part we can do without ambiguity - in the same way people are able to measure temperature and all agree on the value of the measurement. They can even produce water that is exactly 8°C, and different people sensing that water will have a different "feel" - but it will be the same temperature.

Color reproduction and matching has nothing to do with our perception. Color appearance has - but that is something completely different. I think that as a first step it is important that we know how to accurately record and reproduce color. Only then can we deal with color appearance, if we wish to do so.
25. Very quick processing in Gimp gave me this (mind you - I did not touch the color balance or push the saturation, just a simple luminance stretch; I will give you the steps to reproduce this if you like). DSS background calibration can skew color in the image. It is needed for kappa-sigma clipping (the algorithm requires normalized frames). Maybe the best approach would be to use simple average stacking and no background calibration - just to see what sort of color information is there. The issue with astro cameras is that they don't have a color correction matrix embedded in them like DSLR cameras do. We get raw color from them, and that color needs to be adjusted in order to be accurate. I can talk about this process, but I must warn you that there are people who insist that what I'm saying is wrong and that there is no "proper color" for astronomical images.
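For what that adjustment step looks like in practice, here is a minimal sketch of applying a 3x3 color correction matrix. The matrix values are invented placeholders - a real CCM has to be derived for the specific sensor, for example by shooting a color reference chart and fitting in a least-squares sense:

```python
import numpy as np

# Invented placeholder CCM - each row sums to 1 so that white stays white.
# A real matrix must be derived for the specific sensor.
ccm = np.array([[ 1.60, -0.45, -0.15],
                [-0.30,  1.50, -0.20],
                [-0.05, -0.55,  1.60]])

def apply_ccm(rgb_linear):
    """Apply the matrix to linear raw RGB of shape (H, W, 3)."""
    return np.clip(rgb_linear @ ccm.T, 0.0, None)
```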