Posts posted by vlaiv

  1. 5 hours ago, DrRobin said:

    Simply put.....

    F ratio determines how long you will image for (your exposure time);

    Aperture (and focal length) determine how much you will fit in.

     

    E.g. a 12" F/5 telescope will have the same exposure as a 6" F/5 telescope, its just that the 6" will cover a much wider field.

    Simply put - this is wrong.

    Many people forget pixel size - or assume it never changes. F/ratio alone does not determine the "speed" of a setup - meaning it does not determine how long you will need to expose.

    Imagine 8" F/10 scope and 6" F/5 scope for comparison.

    The first is used with a ~11um pixel camera, the second with a ~4um pixel camera. I added the ~ sign (meaning "about") because I wanted to emphasize that in both cases you get around 1.15"/px sampling rate.

    Now you have 8" of light gathering vs 6" of light gathering - both mapping the same 1.15" patch of sky onto a pixel. The 8" will win on speed, even though F/10 is nominally a "slower" scope than F/5, because it is the larger aperture.
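
    As a rough back-of-the-envelope check (a sketch only - metric apertures of ~203mm and ~152mm are assumed, and "speed" here just means relative light collected per pixel at equal sampling rate):

        # Relative "speed" of the two setups above at the same sampling rate.
        # Assumption: signal per pixel ~ aperture area x sky area per pixel;
        # the sky area per pixel is the same here (~1.15"/px), so only aperture matters.
        import math

        def collecting_area(aperture_mm):
            return math.pi * (aperture_mm / 2) ** 2  # mm^2

        area_8in = collecting_area(203)   # 8" F/10 with ~11um pixels
        area_6in = collecting_area(152)   # 6" F/5 with ~4um pixels

        print(area_8in / area_6in)        # ~1.78 - the 8" collects ~78% more light per pixel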

    I agree that you will have a wider potential field with a shorter focal length, within the limits of the optical design (fully illuminated and corrected circle).

  2. Ah sorry, this is even better:

    Quote

    For astronomy enthusiasts eager for practical details, the galaxy’s heliocentric viscous perineum will occur this summer (in August to be precise) at a ridiculous distance of only 64 million light years with an amplitude of -3.14 at the most for an arc of 1687 seconds in length. As a result, the Andromeda galaxy will, at this precise time, appear in the sky even larger than the full moon!

     

  3. 19 hours ago, smr said:

    Funnily enough I then stumbled across this article which says that Andromeda will be nearer to Earth for the first time in 150 million years (!) than it has ever been before in August.... so it won't have been this close since mankind didn't exist!

    http://www.scienceinfo.news/in-august-the-andromeda-galaxy-will-move-closer-to-earth-a-cosmic-event-that-only-happens-once-every-150-million-years/

    Sorry to say - that article is full of BS (pardon my French).

    I'm not saying that to deter you from imaging M31 - I'm just saying for the greater good of readers.

    Due to the motions of the two galaxies and the motion of our solar system inside our own galaxy (orbiting the galactic core), the actual "distance" is changing every second (if we take distance to mean the distance between Earth and a single point in Andromeda - the galaxy core, the center of mass, or whatever). The rate of change is such that you won't notice anything except over vast time scales, because the total distance is huge and any small change will be a very small fraction of it.

    M31 is about 2.5-3 degrees across, so it is about 5-6 times the size of the full moon.

    I do encourage people to read the article to have a laugh, as some seriously funny language constructs were used, like this:

    Quote

    The particularity of this recurring phenomenon (commonly called “circumstellar elliptical conjunction of coercive muscle elongation” in top athletes) is to allow our sun to be projected into the telluric zone of solar attraction like a projectile launched from a slingshot, every 150 million years exactly!

     

  4. 6 hours ago, ollypenrice said:

    It's frustrating that the big chips are colour at the moment.

    I can't say I've been convinced by the full frame colour data I've seen. Odd colour balance (very odd), large stars...  I don't think CMOS is quite there yet, myself.

    Olly

    There could be a number of reasons for your observations - not saying that it is certainly the case, but I'll list them anyway:

    - Large stars: most people don't realize that when using OSC cameras they are in fact sampling at half the rate that the pixel size would suggest. Interpolation of some sort is then used to "fill in the gaps", unless super pixel debayering is used or, more exotic, the sampling matrices are split into separate subs (a minimal sketch of super pixel debayering follows after this list). This means that you are artificially "increasing resolution" (presenting the image at twice the real sampling rate by effectively rescaling the subs). That, coupled with the fact that OSC sensors have smaller pixels, which in itself can lead to oversampling, means that stars will look "bigger" when viewed 1:1. Not a fault of CMOS - but rather of the way such sensors are used.

    - I would also throw in the fact that CMOS sensors are cheaper than CCDs, so they attract more people who can't afford precision mounts - but that might be a moot point given that you have experience with hosted gear, and I suspect the mounts used in such setups are not average "consumer" level.
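
    Here is the minimal super pixel debayering sketch mentioned above - numpy only, assuming an RGGB pattern (an illustration of the idea, not the implementation of any particular stacking package):

        import numpy as np

        def super_pixel_debayer(raw, pattern="RGGB"):
            """Collapse each 2x2 Bayer cell into one RGB pixel - no interpolation.
            Output is half the width and height of the raw frame, i.e. the
            real sampling rate of an OSC sensor."""
            assert pattern == "RGGB"          # this sketch handles RGGB only
            r  = raw[0::2, 0::2].astype(float)
            g1 = raw[0::2, 1::2].astype(float)
            g2 = raw[1::2, 0::2].astype(float)
            b  = raw[1::2, 1::2].astype(float)
            return np.dstack([r, (g1 + g2) / 2.0, b])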

    On the matter of color - there is a big difference in how mono + LRGB and OSC handle color. Just look at the filter response curves. This, however, does not mean that one of them is "true" color. Both can and should be color calibrated, with a suitable transform found to represent true star color, as neither will do that "out of the box". What I suspect is happening is that you are used to the way mono + LRGB renders color (without calibration) and it has become the "de facto" standard for how images should look. If you take the OSC color rendition, it will look wrong (compared to what you are used to). In fact, neither renders proper color without color calibration. If you do color calibration on both, you should get the same colors (or very close - depending on the camera/filter gamut compared to the display device gamut) in both.

  5. 8 minutes ago, Star101 said:

    TS Optics TS 65/420 Quadruplet scope

     

    Image was processed in Pixinsight

    Full image here

     

    Dave

    Could be due to some of the issues I mentioned - image looks like it was denoised a bit?

    At 1.18"/px it is probably a bit oversampled as well. In general that resolution is not oversampling "by default" - up to 1"/px can be "pulled off" - but you need aperture for that. With only 65mm, star FWHM is going to be at least 3"-3.5", which is more in 2"/px territory. Do you happen to have stats on the FWHM in your subs?
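
    For reference, the rough arithmetic behind that 2"/px figure, assuming the often-quoted ~FWHM/1.6 sampling rule of thumb (an approximation, not a hard rule):

        # Sampling rate suggested by expected star FWHM (assumed rule: ~FWHM/1.6)
        for fwhm in (3.0, 3.5):
            print(fwhm, '" FWHM ->', round(fwhm / 1.6, 2), '"/px')
        # 3.0" FWHM -> 1.88 "/px
        # 3.5" FWHM -> 2.19 "/px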

  6. 2 minutes ago, sloz1664 said:

    Just a thought, can you bin cmos chips ?

    Steve

    Sure you can - in pretty much the same way you can also bin a CCD :D

    CCDs, on the other hand, can be binned in a way CMOS sensors can't: CCDs support hardware binning, which CMOS sensors do not. Both can be binned in software, though. The difference between the two comes down to read noise - CMOS sensors have quite a bit lower read noise (in general) and therefore suffer less from its effects when binning in software.
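
    Software binning is nothing more than averaging (or summing) blocks of pixels after readout - a rough numpy sketch of 2x2 average binning, just to make that concrete:

        import numpy as np

        def software_bin2x2(img):
            """Average each 2x2 block of pixels (software binning).
            Works the same for CCD or CMOS data; read noise is already baked
            into every pixel, unlike CCD hardware binning where the binned
            charge is read out only once."""
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
            img = img[:h, :w].astype(float)
            return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))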

    42 minutes ago, kirkster501 said:

    Thinking of getting a new camera.

    I want it to be a CMOS one since I believe that is the strategic direction of imaging.

    I want to be able to use it for widefield on my FSQ (no reducer) but also on my Meade 14" so I need it to have an OAG.  A big chip so I could bin 2x2 everything on the SCT.

    Appreciate your thoughts please.

     

    At the moment, if you are looking for a mono sensor, the ASI1600 (and other cameras based on the same Panasonic chip) is the biggest option. There are a few very interesting options from QHY still in development - I believe the price will be quite high compared to current CMOS offerings. Have a look here:

    https://www.qhyccd.com/index.php?m=content&c=index&a=lists&catid=123

    Or, better, here is an excerpt from the bottom of the page:

    [screenshot: table of upcoming QHY camera models from the bottom of that page]

    Check out the last two entries. I wonder what the read noise will be like.

  7. There are a couple of things that can create this sort of background.

    The first is the way the image was stacked - or rather, the algorithm used to align the subs before stacking. Any alignment of frames requires some sort of interpolation (pixel values in the aligned frame fall in between the pixels of the original frame due to sub-pixel shifts and rotation). Different interpolation algorithms have different properties. Some cause unacceptable artifacts and are avoided (like nearest neighbor interpolation); some are straightforward but cause "pixel blur" (like bilinear interpolation - which in some cases is the same as bin x2). An often used one is bicubic interpolation - that one smooths the data too much, and noise that should be pixel-level noise gets spread over a couple of adjacent pixels, so the noise grain grows in size. The best one to use is Lanczos interpolation, as it preserves much of the noise signature.
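
    As an illustration of where that choice is made, here is a sketch using OpenCV, which exposes these resampling methods as flags (the flag names are OpenCV's own - stacking packages offer the same choice under their own names):

        import cv2
        import numpy as np

        # Sub-pixel shift of a sub with a chosen interpolation method.
        # INTER_LANCZOS4 preserves the fine noise grain best; bilinear and
        # bicubic smear pixel-level noise over neighbouring pixels.
        def align(sub, dx, dy, interp=cv2.INTER_LANCZOS4):
            m = np.float32([[1, 0, dx], [0, 1, dy]])
            return cv2.warpAffine(sub, m, (sub.shape[1], sub.shape[0]), flags=interp)

        # aligned = align(sub, 0.37, -0.52)                    # Lanczos
        # aligned = align(sub, 0.37, -0.52, cv2.INTER_LINEAR)  # bilinear, for comparison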

    The second thing that can create a background like that is excessive use of denoising algorithms based on smoothing similar to bicubic interpolation. Selective Gaussian filtering and the like tend to create this kind of large-grained background. Whenever the denoising algorithm fails to detect a background star, it turns it into a slightly brighter "blob" rather than smoothing it out.

    The third thing that can cause such a background is using a scope that is not diffraction limited, or oversampling heavily on a dense star field. In this case, even with very good SNR, star images will be "smeared" over a larger number of pixels. If you oversample a lot (like in poor seeing), most background stars will look like that instead of being small, concentrated points of light, and the background will contain a large-grain structure composed of blurred stars rather than blurred noise (as in the previous two cases).

    To get a nice "high fidelity" background (I actually don't mind a bit of noise in the background as long as it looks right - pixel-to-pixel variation with very small variation in intensity, i.e. small-scale grain) - you need to avoid the things that alter the background.

    Use a good alignment resampling algorithm - Lanczos 3/4. Don't use aggressive denoising that blurs things out. Try to match the sampling rate to your scope / seeing as best you can. And yes, finally, under these circumstances more data will certainly make the background look smoother and better.

    More data will not make the large-scale grain go away if it is caused by the things I have listed.

  8. I don't think they all purchased a high-end scope with the sole purpose of using it as a guide scope.

    Many people with high-end gear started their imaging with simple scopes like the ED80. After upgrading to a more expensive imaging scope in that class (by class I mean focal length, primarily), the ED80 remained (they could not part with it, or could not be bothered to sell it) and was relegated to the guide scope role.

    There are people who can afford better guide scopes. While the scope itself might not bring anything in terms of guiding precision, they justify the expense by the build quality of such an item - a nicer focuser, better stability due to CNC tube rings, and so on.

    After some time spent in the imaging hobby, you start appreciating good guiding (and here I don't mean round stars - I'm referring to guiding your mount as well as it can be guided), as it has a direct impact on image quality (sharpness). So you start upgrading your components to get the best guiding you can have - and this is why people use good guide cameras (and for similar reasons as above - a good guide camera can double as a good planetary cam, a good EAA/EVAA camera, and so on).

  9. 3 minutes ago, alacant said:

    I don't think we have a choice.

    I believe you do. There are a couple of components that you can adjust in an average OAG setup.

    For example, my OAG has a prism stalk that can be pushed in or pulled out; it is used to position the prism in relation to the sensor. Sensors are rectangular (most of them), so you can choose where to position the prism - closer to the optical axis, further away, etc ...

    You can also put certain spacers in the optical train, and you can choose where to put them. If you need to maintain an exact distance between the field flattener / coma corrector and the sensor, opt to put the spacers "before" the OAG (so that the OAG is closest to the sensor).

    Sometimes you just don't have an option - for example, if you use integrated systems where the filter wheel and OAG are a single unit, or cameras where all the parts are prefabricated to fit together. But even then you can work out, before purchasing such items, whether you are going to be stopping down your OAG or not, depending on the scope you will be using it on.

  10. 3 minutes ago, msacco said:

    One question I can't understand, I see the image being flipped often like that, when it happens and why? ^_^

    It comes down to how the different programs were written - there are two conventions for coordinate system orientation.

    In "normal" math we are used to the X axis pointing right and the Y axis pointing up (positive values increasing). Screen pixels work a bit differently - the top row of pixels on the screen has Y coordinate 0 and it increases "downwards" (the next is row 1, then row 2, and so on) - so there is a "flip" of the Y coordinate.

    If a program simply loads the file (which stores row 0, then row 1, then row 2, etc ...) and displays it directly on the screen, you get one vertical orientation. If it loads the file into the "math" coordinate system, the image will be reversed in the Y direction.

    How a piece of software behaves depends on the people who programmed it - some use the math coordinate space and others just "dump" rows onto the screen - hence the Y flip between programs. It is not a big deal, though, as a Vertical Flip operation is always available and it is "non-destructive" - it does not change any pixel values, it just reorders them (the same goes for horizontal flips and 90 degree rotations).
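
    A tiny numpy illustration of why the flip is harmless:

        import numpy as np

        data = np.arange(12).reshape(3, 4)   # row 0 is the "top" row in screen convention

        screen_view = data                   # drawn with row 0 at the top of the screen
        math_view   = np.flipud(data)        # drawn with row 0 at the bottom (Y increases upward)

        # flipping back is lossless - no pixel value changes, only the row order
        assert np.array_equal(np.flipud(math_view), data)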

  11. [image: reprocessed result]

    Here is another attempt - processed in Gimp. I did a "custom" vignetting removal - it kind of works. Here is what I did:

    Open the stacked image in Gimp 2.10 (use 32-bit per channel format). Copy the layer. Do a heavy median blur on the top layer (blur radius of about 90px or so) - leave the next two parameters at 50% and use high precision.

    Set the layer mode to division and set the layer opacity to something low, like 10%. Merge the two layers. Now just do levels / curves and you should get a fairly "flat" central region.
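
    For anyone who prefers to script it, a very rough Python sketch of the same idea (the heavily median-blurred copy acts as a crude synthetic flat, blended at low strength - an approximation of the layer trick above, not an exact reproduction of Gimp's blend math):

        import numpy as np
        from scipy.ndimage import median_filter

        def rough_flatten(img, radius=90, strength=0.1):
            """Divide by a heavily median-blurred copy of the image and blend the
            result back at low opacity. Follow with levels/curves, as above."""
            blurred = median_filter(img, size=2 * radius + 1)
            divided = img / np.maximum(blurred, 1e-6)
            return (1 - strength) * img + strength * divided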

  12. It did look rather whitish :D

    [image: stacked and processed result]

    I used super pixel mode as the debayering method:

    [screenshot: DSS debayering settings with super pixel mode selected]

    And these were stacking parameters:

    [screenshot: DSS stacking parameters]

    After that I loaded the image into StarTools (the trial - I don't otherwise use it to process images, but it can attempt to remove vignetting, which is why I used it), did a basic develop, removed the vignetting and color cast, and did a color balance.

    I took a screenshot of the result at 50% zoom (the trial version won't let you save the image for further processing).

  13. 30 minutes ago, msacco said:

    Maybe someone can get anything decent out of it? (by my level of 'decen't, I mean bad result for others, but something that's still cool to see).

    I don't like working with uncalibrated data, but here is a quick attempt:

    [image: quick processing attempt]

    The lack of flats really shows - there is enormous vignetting that is hard to remove. This is a quick process in StarTools after stacking in DSS - I tried to wipe the background and remove the vignetting, but as you can see, it did not work very well; still, I kind of like the effect.

    There is some nebulosity showing - so there is something to look at :D

     

  14. @alacant

    This is quite confusing now, as the ASI294 has an RGGB bayer pattern according to the ZWO website:

    [screenshot: ZWO specification page listing the RGGB bayer pattern]

    Which means that it can read as either RGGB or GBRG depending on the software used (whether it flips the image upside down - screen versus image coordinate system).
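
    A toy example of why a vertical flip turns RGGB into GBRG:

        import numpy as np

        # 4x4 mosaic of filter letters laid out as RGGB
        rggb = np.array([["R", "G", "R", "G"],
                         ["G", "B", "G", "B"],
                         ["R", "G", "R", "G"],
                         ["G", "B", "G", "B"]])

        flipped = np.flipud(rggb)   # what software using the other Y convention sees
        print(flipped[:2])          # [['G' 'B' 'G' 'B'] ['R' 'G' 'R' 'G']]  -> GBRG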

    The FITS header gives the wrong bayer pattern (but it does note that some software should use the vertically reversed order).

    [screenshot: FITS header showing the bayer pattern keyword]

  15. 3 hours ago, msacco said:

    Maybe this would be of any use? https://www.astrobin.com/415559/?nc=user

    When posting this image he also said: here are the RAW fits images of this image:

    https://drive.google.com/drive/folders/1VgMbdtTBx4sEQuKP6iMWjYk4q8oK6uWE?fbclid=IwAR2nhiIGYdOed3km79F0BHDiDus1O1nHiGQ944-eHYFJrX2xfd7FKKoi6U0

    So I simply downloaded all these images and stacked them up; from the data I could see that he's using a ZWO 294, though I don't really have more information.

    Ok, I see your point. This image has very low saturation for some reason. It can be processed, but it requires more sophisticated tools (or quite a bit of fiddling with white balance and saturation in an image processing program).

    The color is there, but it is so weak that you need to boost the saturation quite a lot.

    [image: result after boosting saturation and levels]

    This is just a bit of fiddling around with saturation and levels in Gimp after stacking in DSS. In order to process an image like this, you need to be able to wipe the background - there is quite a bit of LP gradient. You also need to do a proper white balance on the image, as green is too strong, etc ...

    I think the image posted on Facebook above was probably processed with PixInsight or a similar tool, which offers all of that functionality.

  16. 29 minutes ago, msacco said:

    He shared his RAW files, and I used Deep Sky Stacker to stack up the images, and this is the result:

    Let me get this straight - you used someone else's data to stack an image, right?

    Do you know what sort of raw data you got from them? Raw subs? An already stacked image in LRGB? What sort of camera was that person using?

    I would need a bit more info before I can diagnose why you are getting a B/W result. It is entirely possible that you applied the wrong workflow for their data, but we need to check that.

  17. 1 minute ago, Singlin said:

    Do I need to do darks for every exposure length?

    That depends on what sort of sensor you have.

    If your sensor has a well-behaved bias (and most CMOS sensors have some issues with the bias signal), you can shoot darks only at your longest exposure length and use dark scaling.

    Dark current depends linearly on time (unlike temperature, where the dependence is exponential), therefore you can take a 10-minute dark, remove the bias and divide it by 2 to calibrate a 5-minute light with it. Stacking software should be able to do this automatically for you.
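
    The scaling itself is simple arithmetic; a short sketch (assuming a stable bias, which is exactly what many CMOS sensors lack):

        def scaled_dark(master_dark, master_bias, t_dark, t_light):
            """Scale a long master dark to a shorter light exposure.
            Dark current scales linearly with time; bias does not scale at all,
            so it has to be removed first and added back afterwards."""
            thermal = (master_dark - master_bias) * (t_light / t_dark)
            return thermal + master_bias

        # e.g. calibrating 300 s lights with a 600 s master dark:
        # dark_for_lights = scaled_dark(master_dark, master_bias, 600.0, 300.0)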

    With CMOS sensors and their bias issues, you can't subtract the bias from the darks (it changes between exposure lengths for some reason, or even between power cycles of the sensor if there is some sort of internal calibration), and therefore you can't scale darks, because the bias does not depend on exposure length and temperature in a predictable way.

    If you have a CMOS sensor, it is best to do a set of darks for every exposure length you use. On the other hand, depending on the scopes and filters you use, you should not need too many different exposure lengths. In principle you can get away with one exposure length per scope, maybe two if you run the risk of over-exposing star cores - one "proper" exposure length and one short "filler" exposure length.

    Exposure length will largely depend on your conditions and the capability of your mount - go with the longest exposure that you can manage to guide/track without any trailing issues.

    How short you can go with your exposure length depends on how high your read noise is - lower read noise means you can use shorter subs as your main exposure. To be precise, you need to compare the read noise to the other noise sources - when it becomes the dominant noise component, increase the exposure length.
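
    One common way to put a number on that comparison is to expose until the sky (LP) shot noise is several times the read noise. A hedged sketch - the swamp factor and the example numbers are illustrative assumptions, not universal values:

        def min_sub_length(read_noise_e, sky_e_per_sec, swamp_factor=5.0):
            """Shortest sub for which sky shot noise is ~swamp_factor x read noise,
            i.e. sqrt(sky_e_per_sec * t) >= swamp_factor * read_noise_e."""
            return (swamp_factor * read_noise_e) ** 2 / sky_e_per_sec

        # e.g. 1.7 e read noise and 2 e/px/s of light pollution:
        # min_sub_length(1.7, 2.0) -> ~36 s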

    There are other considerations for exposure length - how much data you want to store per session (shorter subs - more of them, more data), and how likely it is that you will have to discard a sub (wind gust, earthquake, cable snag, sudden flash of light in the direction of the scope - a passing airplane or car headlights, etc ...). It is better to discard a 1-minute sub (1 minute lost) than a 30-minute sub (30 minutes of imaging time lost).

  18. 12 minutes ago, msacco said:

    Wow, amazing explanation, thanks a lot for that! I had to read it a few times to stack(Ba Dum Tsss) and process all the information here. A few things I'm still find kinda confused about:

    How do I take the bias frames before trying to even image my target? How do I know which ISO and exposure time to use? Same goes for dark frames.

    The logical process to me would be setting everything up, trying to image the target, get all the settings correctly, and only then get all the bias and dark frames.

    Also, how much frames do I approximately need to take for each one?

    For example, let's say I have 20 light images, what is the image ratio for the other frames?

    As for the colors, I'm still not quite sure I understood how it should work, I have the raw images of andromeda now, I believe that bias/dark/flats could improve the image further, but that doesn't seem to have any assistance color wise.

    How do I go about getting color on the stacked image?

    Thank you so much for the incredibly detailed answer again, it was very very useful, and helped me understand how I should get it started.

    Astrophotography is a bit different from regular photography in the way you treat your settings.

    With regular photography you compose your shot, do metering, decide on the F/stop, then see what sort of exposure you need (to avoid or impart blur), set your ISO based on those two, etc ...

    This is because you are taking a single image, and you want to get it right in that single image.

    With AP you are stacking images, and you need to process the result to get a meaningful image. This is because in AP you have enormous dynamic range in the captured data. You can have as much as 10 magnitudes of difference between stars in a single image - that is a x10000 difference in light intensity.

    That is why with AP you don't aim for a "good exposure" - you aim for good SNR, and you adjust the exposure in post-processing (think of histogram manipulations used to show what is in the shadows of a regular image - in AP, most of the stuff is in the shadows).

    In AP with a DSLR, ISO should always be set to one value (each camera has a best ISO value for AP), and that is most often ISO 800 (some cameras are best at ISO 400, others at ISO 1600). Do a search for your camera model to see if you can find a recommended ISO value for AP. This means that ISO will be known in advance.

    The same goes for exposure time - it is not governed by the brightness of the target as in regular photography; it is dictated by other factors. Here is a short list of things that impact it:

    - Go with the longest exposure that you can manage.

    - Take care not to saturate because of LP (light pollution), or to saturate parts of the target. In general you can work around saturation of stars and bright parts of the target by taking a small set of short exposures at the end of the session (an "advanced" technique).

    - Depending on your mount / guiding / polar alignment, mechanical issues are most likely to be the limiting factor in how long you can expose. Again, this is something you need to test. Start with a fairly common exposure value for DSLR AP - 30 seconds - and build from there. If your subs look OK - no star trailing, stars tight and round - you can try to increase the exposure length.

    As for frame numbers - it is always better to have more. More light frames, more darks, more bias, more flats and flat darks ... At some point, however, you hit diminishing returns. We can sum things up by saying that to improve the result x2 you need x4 the number of subs - and you can see how at some point you will hit "it does not make sense to take more" :D. For example - if you do a total of 1h of imaging, then yes, it is worth doing another 3h on the same night (provided you have time for it) to get a twice better result. However, once you've done 4h that evening, going for another x2 better result would mean three additional nights of imaging the same target - the weather needs to cooperate, that is x3 additional setup time doing exactly the same thing, etc, etc ... Yes, sometimes people are willing to spend 4 nights on one target, but the next x2 improvement after that would need a further 12 nights ... so there is a limit at some point, and that limit will be imposed by you.
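
    The arithmetic behind that, since SNR grows with the square root of total integration time:

        import math

        # each x2 improvement in SNR costs x4 the total time already spent
        for hours in (1, 4, 16, 64):
            print(hours, "h ->", round(math.sqrt(hours), 1), "x relative SNR")
        # 1 h -> 1.0x, 4 h -> 2.0x, 16 h -> 4.0x, 64 h -> 8.0x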

    As for the other frames, again - it depends on your budget, but get at least a few dozen of each (20-30, up to 50, of each). Just as a reference, I'm a bit of a nut case here, but I take a couple of hundred of each type of calibration frame.

    If you have some raw subs of the Andromeda galaxy and want to get a grip on color in AP, then simply use the data you already have to learn. Stack those subs without any calibration frames - making sure you tell your stacking software to treat your RAWs as color (bayer matrix) images. This will produce a color image - then import that image into some processing software (like PS or Gimp) and try to "expose" it properly - do a histogram stretch, apply some color correction and add some saturation. You should have color. You can always take the stacked image and attach it here (in this thread or a new one) and ask for help - people will process the image to show what you have captured (you will no doubt see different renditions of your data), and I'm sure they will explain how they got their results so you can try it yourself.
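
    If you would rather experiment in Python than in PS/Gimp, here is a very crude sketch of a first stretch (an arcsinh curve applied to a linear stack normalised to 0-1; the black point and strength values are arbitrary starting points, not recommendations):

        import numpy as np

        def simple_stretch(img, black=0.001, strength=200.0):
            """Crude non-linear stretch that lifts faint signal out of the shadows.
            img is the stacked, linear image scaled to the 0-1 range."""
            x = np.clip(img - black, 0.0, None)
            return np.arcsinh(strength * x) / np.arcsinh(strength)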

    One important thing I missed in the previous post - flats need to be taken without altering anything on your scope, so it is best to do them right after the lights, while the camera is still attached and you have not moved the focus. If you change something, there is a good chance that you won't be able to get matching flats. A focus change, camera rotation, dust settling or being dislodged - any of those can break proper flat calibration.

     

  19. 1 minute ago, Anthonyexmouth said:

    to do the darks I had the camera cap on and sensor down on a table, thought that would help stop light leakage, must have been wrong.

    i'll pop the camera off later and redo a few darks. 

     

    That is how I do it, never had any issues.

    What sort of table is it? Some materials are transparent to IR - plastic, for example. That is why there is a risk in using just the plastic cap over the sensor. If the table is plastic or glass, it won't help much. Look for a thick wooden table for this. Or, better yet, put a piece of aluminium foil over the camera cap.

    If aluminum foil can stop someone messing with your brain waves, it can certainly fend off some IR radiation :D

     

  20. 35 minutes ago, Anthonyexmouth said:

    L_2019-07-21_00-30-12_Bin1x1_240s__-10C.fit (22.31 MB), D_2019-07-20_10-29-37_Bin1x1_240s__-10C.fit (22.31 MB)

    heres a light and dark to start with.

    its very possible i screwed up the darks being new to the cooled cmos scene. 

    Everything appears to be in order but something is seriously wrong :D

    The header information is as it should be. Both show the same temperature, gain settings and exposure length - one is labeled a light frame, the other a dark frame (by APT).

    However, the data in the dark suggests it is more like a flat than a dark. How did you take your darks?

    Look at this:

    [screenshot: histogram of the light frame]

    This is the histogram of the light frame over the full 16-bit range - exactly what you would expect: it's bunched up to the left but not clipping, and it has 3 peaks because it is an OSC camera (slightly different sensitivity to the sky background between R, G and B) - so it looks like a proper light frame.

    Now this is the histogram of the dark frame - same range of values (16 bit, or 0-65535):

    [screenshot: histogram of the dark frame]

    It should not look like that - this is how a flat histogram would look (under a funny light source, with quite a bit of vignetting).

    That would suggest you mistook a flat for a dark, but I don't think so, as the stretched dark does not show the usual things a flat shows - dust shadows and proper vignetting. It shows a very strange type of vignetting, so I'm guessing the camera was not on the scope when you took the darks. Here is a stretch of the dark to show what sort of vignetting it has:

    [screenshot: stretched dark frame showing the odd vignetting]

    So my guess is that you took the darks with the camera detached from the scope but with some sort of light leak - the camera was not properly covered, and judging by the histogram it is most likely IR light.

    This is good news. Your lights are fine - you just need to redo the darks properly and everything should be OK.
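
    For anyone who wants to run the same sanity check on their own files, a short sketch using astropy (the filename is the dark attached above; a healthy dark should show a single narrow spike just above zero rather than a broad, flat-like distribution):

        import numpy as np
        from astropy.io import fits

        data = fits.getdata("D_2019-07-20_10-29-37_Bin1x1_240s__-10C.fit").astype(float)
        print("median:", np.median(data))
        print("99.9th percentile:", np.percentile(data, 99.9))
        hist, edges = np.histogram(data, bins=64, range=(0, 65535))
        print("fraction of pixels in the lowest bin:", hist[0] / data.size)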
