Everything posted by vlaiv

  1. You can do it in ImageJ (at least I did) - and yes, go through the whole range of luminance subs to see if you can find anything useful in how it behaves. These are only the first 30 subs (calibrated, normalized, and binned x10 to increase the SNR of each individual sub).
  2. Ok, so I took the first 30 subs and examined them, and I think it's not high-altitude clouds - it is some sort of external light - some reflection that changes as you track the sky. It's not a calibration issue - but it does change from sub to sub, in a fashion tied to tracking the sky - look at this animation of your subs (stretched):
  3. I did download the masters and they seem fine (I still can't access individual subs on Microsoft OneDrive, but never mind that). Here is the luminance that you stacked: it has all sorts of weird gradients - but this actually looks like high-altitude clouds rather than something wrong with your imaging train. I can download individual subs from OneDrive (just checked) and will download some of the luminance subs to see if I can spot any sign of high-altitude clouds passing.
  4. Improvement is visible, but it is still oversampled. Have you considered doing software binning on that data? It will let you manage the data better, as the SNR will be higher.
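Software binning of the kind suggested above can be sketched in a few lines of NumPy - a minimal illustration (the function name and interface are mine, not from any particular stacking tool):

```python
import numpy as np

def software_bin(img: np.ndarray, factor: int) -> np.ndarray:
    """Average-bin a 2D image by `factor` in each axis.

    Averaging factor x factor pixel groups raises per-pixel SNR
    roughly by `factor` (noise adds in quadrature) at the cost of
    resolution - useful when the data is oversampled.
    """
    # Crop so both dimensions divide evenly by the bin factor
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    # Group pixels into factor x factor blocks and average each block
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

demo = np.arange(16, dtype=np.float64).reshape(4, 4)
print(software_bin(demo, 2))  # 2x2 result, each value a 4-pixel average
```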
  5. @BrendanC Would you mind "pruning" the above data a bit (I'm on a metered connection and a smaller download would be better) and placing it at another download location? (The current one asks for an account in order to download, and I don't want to register an account with Microsoft at this point.) As far as pruning goes - take all your calibration files (darks, flats, flat darks) for each channel and simply do an average stack of those. Just make sure the result is 32-bit and that there is no scaling to the 0-1 range - the ADUs should be preserved.
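The "average stack, 32-bit, no 0-1 scaling" step described above could look like this in NumPy, assuming the frames are already loaded as arrays (`average_stack` is a hypothetical helper, not a feature of any named tool):

```python
import numpy as np

def average_stack(frames: list) -> np.ndarray:
    """Mean-combine calibration frames into a 32-bit float master.

    No rescaling to the 0-1 range is applied: the output stays in
    the original ADU range, just averaged, which is what downstream
    calibration expects.
    """
    stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames])
    return stack.mean(axis=0, dtype=np.float32)

# Two toy "dark frames" at 100 and 300 ADU average to 200 ADU
master = average_stack([np.full((2, 2), 100.0), np.full((2, 2), 300.0)])
print(master.dtype, master[0, 0])
```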
  6. I don't see much point in calculating an "effective" F/ratio. F/ratio only matters for beam characteristics (the angle at which light comes in) - and that won't change with a central obstruction - it will still be an F/5 beam. What you might want to calculate is the effective aperture - the equivalent clear aperture of the 130mm reflector. You'll need to take into account the secondary obstruction - but also the reflectivity of the mirrors, which will probably contribute more to light loss than the central obstruction. The secondary is most often quoted by its minor axis - that is the diameter of the profile presented to incident light. Here is the calculation assuming 93% reflectivity per mirror (this figure is a bit harder to find and is often given as peak reflectivity - it changes over the visible spectrum and is perhaps better represented by 93% for regular and 97% for enhanced coatings): clear_aperture = sqrt((130^2 - 47^2) * 0.93 * 0.93) = ~112.7mm of clear aperture. Do be careful - the 112.7mm above does not mean that the 130PDS is equivalent to a 112.7mm refractor in terms of light-gathering power. Refractors also lose light in their optical elements and at air/glass surfaces. Say you lose something like 0.5% per air/glass surface in a triplet scope - it would then need to be a 114.4mm aperture scope to have 112.7mm of clear aperture. I'd say that the 130PDS is equivalent to a 115mm refractor, and in fact this refractor would be a good replacement for the 130PDS: https://www.teleskop-express.de/shop/product_info.php/info/p3041_TS-Optics-PHOTOLINE-115-mm-f-7-Triplet-Apo---2-5--RAP-focuser.html With the x0.8 FF/FR it has ~640mm of FL and the same equivalent clear aperture.
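The clear-aperture arithmetic above, wrapped in a small function so other obstruction and reflectivity figures can be tried (the 130mm / 47mm / 0.93 numbers come from the post; everything else is illustrative):

```python
import math

def equivalent_clear_aperture(aperture_mm: float,
                              obstruction_mm: float,
                              reflectivity: float,
                              n_mirrors: int = 2) -> float:
    """Equivalent clear aperture of an obstructed reflector.

    Light gathering scales with collecting area, so subtract the
    central obstruction's area and apply per-mirror reflectivity,
    then convert the remaining effective area back to a diameter.
    """
    effective_area_factor = (aperture_mm**2 - obstruction_mm**2) * reflectivity**n_mirrors
    return math.sqrt(effective_area_factor)

# 130PDS: 130mm primary, 47mm secondary minor axis, 93% per mirror
print(round(equivalent_clear_aperture(130, 47, 0.93), 1))  # 112.7
```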
  7. I had no issues with short flats - in fact, I used few-millisecond exposures for flats and it worked fine with my ASI1600MM-cool.
  8. Will do as soon as I find some time (need to urgently fix something on one of my projects).
  9. When you bin, you increase SNR - that part is easy. In order to split stars from nebulosity, Starnet expects 16-bit data - which I really don't like, as I prefer to process images at 32 bits per channel - so for it to do its thing, I need to "pre-stretch" the image. With binned data, more faint stars are kept, as they show up in the initial pre-stretched image and can be distinguished from the noise. That is the first part. The second part is how much you can sharpen your data. Although I do selective sharpening, I do like high SNR, as much less noise is generated / brought up to the surface by the sharpening process and it is easier to control. This makes tighter stars and more detail in the nebulosity possible as well. As for the oversampled look - compare the look of these three stars - just the central core: this is the same star at three different bin levels (or sampling rates). Each of these is at 100% zoom. I like it when stars stay points even at 100% and don't turn into little (or big) balls.
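The "pre-stretch, then hand 16-bit data to Starnet" step might be sketched like this (asinh is one common stretch choice, used here as an assumption; the `strength` knob is purely illustrative and not a Starnet parameter):

```python
import numpy as np

def prestretch_to_16bit(img: np.ndarray, strength: float = 500.0) -> np.ndarray:
    """Nonlinearly (asinh) stretch linear 0-1 data and quantize to
    16 bits, so tools that expect 16-bit input can accept it.

    Higher SNR (e.g. from binning) means faint stars sit above the
    noise after the stretch instead of being lost in quantization.
    """
    # asinh compresses highlights while lifting faint signal
    stretched = np.arcsinh(strength * img) / np.arcsinh(strength)
    return np.clip(stretched * 65535, 0, 65535).astype(np.uint16)

linear = np.array([[0.0, 0.01, 1.0]])
print(prestretch_to_16bit(linear))  # faint 0.01 is lifted well above 655
```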
  10. I don't really know if I could pull out that much sharpness if I hadn't binned. I'm not saying it can't be done - but it's certainly above my skill level. Enough SNR just makes it possible for me.
  11. Well, in order to get good tight stars and good detail I needed SNR, and I don't particularly like the oversampled look when fully zoomed in, so I binned the data x3. Binning x3 gives about 1.215"/px (quick measurement), which is just about right for this data to look sharp at 100%. The additional SNR makes it a bit easier to sharpen properly without introducing too much noise.
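For reference, sampling rates like the one quoted above follow the standard image-scale formula, arcsec/px = 206.265 × pixel size (µm) / focal length (mm), scaled by the bin factor. A quick sketch (the example numbers are generic, not the poster's setup):

```python
def sampling_rate(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
    """Image scale in arcsec/pixel for a given pixel size, focal
    length, and (software or hardware) bin factor.

    206.265 converts radians to arcseconds per the usual small-angle
    approximation (206265 arcsec/rad, with pixel size in microns and
    focal length in millimeters absorbing a factor of 1000).
    """
    return 206.265 * pixel_um * binning / focal_mm

# e.g. a 3.76 um pixel camera on a 650 mm scope
print(round(sampling_rate(3.76, 650.0), 3))        # native scale
print(round(sampling_rate(3.76, 650.0, binning=3), 3))  # after bin x3
```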
  12. Late to the party with my process, but I do hope you like it. I went for natural-looking stars, airy-looking nebulosity, and as much detail as I could pull out. ImageJ / Starnet v2 (standalone) / Gimp
  13. If we are talking color reproduction, I'd probably choose the following combination: Wratten #80A: #8: and a regular UV/IR cut filter. In fact, I would use the UV/IR cut all the time, and use three channels: no filter, #80A, and #8. These are all very affordable and have very high total transmission. They separate the spectrum well enough that we can get proper colors with a color correction matrix. The only place where they have an issue is around the yellow/orange part (in the above graphs) - the 570-610nm range; luckily, most cameras have a sloped curve at that point, and the total per-filter response will depend on both the camera response and the filter response multiplied together.
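Applying a color correction matrix to such data is a single 3x3 matrix multiply per pixel. A minimal sketch - the matrix values below are placeholders (a real CCM is fitted from measured camera + filter responses against known color patches), chosen only so that each row sums to 1 and white is preserved:

```python
import numpy as np

def apply_ccm(rgb: np.ndarray, ccm: np.ndarray) -> np.ndarray:
    """Apply a 3x3 color correction matrix to linear RGB data of
    shape (..., 3), mapping raw filter responses toward a standard
    color space. Negative results are clipped to zero."""
    return np.clip(rgb @ ccm.T, 0.0, None)

# Hypothetical white-preserving matrix (rows sum to 1) for illustration
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

white = np.array([1.0, 1.0, 1.0])
print(apply_ccm(white, ccm))  # white stays white under a row-sum-1 matrix
```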
  14. I guess that with a flex coupler you risk sag or a missed step (repeatability of focus position) if you put too much weight on the focuser?
  15. In that case the Omegon is the clear winner in terms of affordability - only 1252 euro without tax, import duty, and shipping. @ONIKKINEN How much was your RISING cam - without local tax and import fees (only what you paid when ordering + shipping)?
  16. I think it's rather nice. Have you tried starless processing? Maybe that way you'll have more control over what is happening in the image. Care to share the data?
  17. Actually, as far as color gamut is concerned, the best filters to combine with mono are absorption filters rather than interference filters. Due to their lower QE they are not as popular, but they are much cheaper. A good combination would be blue, green, and orange. However, the blue and green reach only about 70% transmission at their peaks. The upside of absorption filters is that you lower the probability of nasty reflections (if properly implemented with AR coatings, of course).
  18. This is a great point. None of the people who actually discovered objects in space had any idea what they were at the time. That did not prevent them from discovering and wondering (and, I'm certain, enjoying the process). Planets were thought of as stars that wander around (as they move differently than the stars in the night sky). Other galaxies were thought of as the same as other nebulae - lumps of gas out there - we had no clue that they are enormous groups of billions of stars much further away.
  19. IR and UV would be visible. None of the recorded colors would look like the original. When shooting pure spectral colors, in principle we can record them (depending on camera gamut) - but in all likelihood we lack a display capable of showing them. Take a look at this graph: it is called an xy chromaticity diagram, and in this particular case it shows the sRGB color space (what computer screens can usually show) versus the whole gamut of human vision. The outer border of this shape is where the spectral colors lie (they have been labeled from 380 to 700, representing the wavelength in nanometers of each spectral color). Any screen we produce to display colors is usually made with 3 components (we often call them R, G and B). Each of the colors our screen uses to mix the resulting color is a dot on the above graph. A display is capable of showing only colors inside the triangle formed by those three dots (or, if we used more primaries, only inside the convex hull made by connecting those dots). This shows that we can never render more than 3 spectral colors with a trichromatic display (and only if the primaries are in fact spectral colors). Look at this diagram here: it is the same as above, but shows different color spaces. You can note certain things about some of them. 1. A color space does not need to have only 3 components. CMYK, for example, has 4, and this in turn means that its gamut won't be a triangle - it has more vertices because of the defined primaries (using ink on paper is not the same as generating light, so don't be confused: the number of vertices equals the number of light-emitting primaries, not the number of colors). 2. ProPhoto RGB is a "virtual" color space - no display in the world can be made to show the full ProPhoto RGB profile. Two of the three primaries it uses do not exist in nature (they lie outside the gamut of human vision).
It is only a mathematical color space. RGB color spaces use different primaries (different blues, greens and reds), and you can't always simply treat R, G and B as R, G and B - you need to know which color space you are working with. This is the reason people get different colors for objects in astrophotography: they don't pay attention to color spaces (especially the raw space of their camera and the conversion to a standard color space). Some cameras can't even record spectral colors - they have a narrower gamut. In order to uniquely record each spectral color, the following must happen: when you look at your QE response for each color channel, for every X coordinate (wavelength in nm) you must have a unique triplet of RGB values - I marked the triplet of RGB sensitivities for 500nm above. This triplet must be unique in order to uniquely record that spectral color. We can see in the above image that we can't distinguish colors above about 900nm - the sensitivities are the same, so each triplet is effectively 1:1:1 - we can't know whether we recorded 900nm, 950nm or 1000nm if we see a pixel with 1:1:1 values (or 0.2:0.2:0.2 - the actual value will depend on exposure length, or on the brightest pixel in the image if all pixels are scaled to 0-1). On the other hand, Mono + RGB is notoriously poor at recording pure spectral colors - look at this: take any spectral color in, say, the 600-700nm range. It will only record a red component and no blue or green, so any pixel value will be of the form X:0:0, where X is an arbitrary number (it depends on exposure length). You can't distinguish any spectral color in that range. The same is true for most of the G and B ranges as well. Bottom line: - UV and IR will be recorded - No display will ever show the image correctly (until someone makes a display that shows individual spectral colors instead of working with primaries) - Not all cameras are even capable of recording/distinguishing spectral colors (they can record - count photons - but can't tell different wavelengths apart).
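The "unique triplet" criterion described above is easy to test numerically: normalize each wavelength's R:G:B response for brightness (removing the exposure-length dependence mentioned above), then check for duplicate triplets. A small sketch under those assumptions (function name and test data are mine):

```python
import numpy as np

def unique_triplets(responses: np.ndarray, tol: float = 1e-3) -> np.ndarray:
    """Which rows of an (N, 3) R,G,B sensitivity table are unique
    after brightness normalization - i.e. which spectral samples a
    sensor can actually distinguish.

    Normalizing each triplet by its sum makes 1:1:1 and 0.2:0.2:0.2
    compare equal, mirroring the exposure-scaling argument above.
    """
    norm = responses / np.maximum(responses.sum(axis=1, keepdims=True), 1e-12)
    out = []
    for i, t in enumerate(norm):
        # ambiguous if any other wavelength yields the same normalized triplet
        clash = any(np.allclose(t, u, atol=tol) for j, u in enumerate(norm) if j != i)
        out.append(not clash)
    return np.array(out)

# Mono + red filter over 600-700nm: every triplet is X:0:0, so none
# of these samples can be told apart
red_only = np.array([[0.5, 0.0, 0.0], [0.8, 0.0, 0.0], [0.3, 0.0, 0.0]])
print(unique_triplets(red_only))  # all False
```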
  20. Interestingly enough, all three have different body shapes: the TS one is fully flat - cylindrical. The Omegon one is not - it has a slightly fatter end, but only two "cooling grooves". The Altair version has a similar shape but many more grooves. Are they all made by Touptek? The Lacerta version looks like the TS version and is clearly labeled as Touptek on the body (TS also clearly states that their version is Touptek).
  21. Here is a nice short video covering the types of nebulae
  22. Don't be put off by terminology that you don't understand at the moment. Give it some time and things will fall into place. The sentence above describes an effect not much more complicated than, for example: "When you flip the switch, electrical current flows through the circuit and the light bulb emits light". There are numerous depths at which such a phenomenon can be explained - from the very mundane (you want light in your kitchen, so you flip the switch, the light comes on and you can see better) all the way down to the equations governing elementary particles and fields at the quantum level. However you slice it, it is the same phenomenon - and you don't need to know the deeper levels in order to understand and appreciate how the kitchen light works. As far as astronomy and nebulae go, it is good to have some very basic knowledge of the types of astronomical objects and how they are differentiated. Nebulae are shining gas in space, but there are several different kinds. There are emission and reflection nebulae; they differ in how they shine. Emission-type nebulae shine much like a flame shines (a flame is gas that is hot enough to emit light) - but, as pointed out above, the process is a bit different and much more similar to neon signs (gas in a tube that is made to shine by electricity). Reflection-type nebulae shine because a nearby star is illuminating them. It is very similar to shining a torch at smoke or fog - light from some source (like a star) is simply scattered in the gas. Emission nebulae can be diffuse, planetary, or supernova remnants. Diffuse nebulae are just gas floating around under the influence of gravity, while planetary nebulae and supernova remnants occur when a star dies. If it is a small star, it won't make a big explosion (it won't go supernova) - it just ejects the outer layers of its material, which then slowly spread into an outer shell that glows.
Supernovae are very violent explosions that rip stars apart and eject stellar material all over the place - that gas also shines.
  23. Using white dots to designate something in a star field image? Maybe not the best idea ever
  24. Meteors usually look like that when their trajectory is viewed at a shallow angle (almost parallel to the trajectory). You have the faint part, then a strong part as the meteor hits thicker layers of the atmosphere, and then the "fireball" as it burns up. The trajectory has the distinct shape of a projectile in a gravity field - a parabola. It just looks very curved because it is viewed at a very shallow angle - the actual curve is much more gentle.
  25. If you have one - then yes, give it a go. That is an emission region, and a good dual-band (or more bands) filter will eliminate sky glow / LP background and improve SNR substantially (if you have significant LP).