Everything posted by ONIKKINEN

  1. I kinda like the "dreamy" look when zoomed in with the sharp foreground and the distant slightly defocused stars. Gives the image a sense of grand scale IMO.
  2. If it's just a uniform colour of light pollution over your images, it goes away with simple colour balancing. I have not used Affinity, but in Photoshop you can do this with levels, curves, the colour balance tool, or all three, or with the auto-colour and auto-tone options. Try to bring your histogram peaks together like this: Now this isn't "done" yet because it's far too blue, but that's the gist of it. To be honest I never learned to do this properly by hand, since there are free tools that do it infinitely better. Don't be afraid of the beige cast on your images; it's very much normal unless you're imaging from a very dark site, and it goes away with colour balancing.
But it would be better to start using dedicated astronomy software as soon as possible, as this kind of manual slider fiddling will just not get the same result as a tool meant for the job. There are some really simple and free programs that do this much better. For stacking: DeepSkyStacker and Siril. DeepSkyStacker preprocesses and stacks your images and is very simple to use, even for beginners. Siril is maybe a bit less simple, but with it you can preprocess, stack and process the image. You could also stack in DSS, then process the stack in Siril, and do final touch-ups in Affinity or PS.
Siril has a gradient removal tool that attempts to fix gradients if your image has them. From Bortle 7 or 8 I would say it's almost guaranteed that you do, so this step is quite useful, and in many cases absolutely necessary, to get a presentable image in the end. Once the gradient is fixed you can run the Photometric Colour Calibration tool, which detects which stars are in your image, looks up their colours from a photometric catalogue, and attempts to match your image's colours to the catalogue values. It works extremely well for most types of images.
There is also a manual colour balance tool where you select one box that represents the background and another that represents a white reference, like a white/yellow star, or in the case of a galaxy you could select the entire galaxy.
The filters you are thinking of are situation dependent. For emission nebulae like M42 you will benefit from narrowband filters such as the L-eXtreme. For broadband targets like galaxies you do not benefit from these kinds of filters, as there is no specific wavelength you can pick out to get the best result. For galaxy imaging the best choice is to just deal with the light pollution and capture through it, with no specific filter. If the conditions are really bad, like inner-city bad, you might want to use a broad-ish light pollution filter like an L-Pro or something similar. But you should know that such a filter removes the possibility of getting a true-colour galaxy image in the end, as you have cut off a big chunk of the actual signal along with the unwanted light pollution. I would advise against such a filter for galaxy imaging, but for nebulae the L-eXtreme or similar would make a big difference.
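The "bring your histogram peaks together" move above can be sketched numerically. This is a minimal sketch, assuming a linear RGB image held in a numpy array; the function name and the use of the channel median as a stand-in for the histogram peak are my choices, not part of any particular program:

```python
import numpy as np

def equalize_backgrounds(img):
    """Align the per-channel histogram peaks of a linear RGB image.

    img: float array of shape (H, W, 3).
    Estimates each channel's background as its median (a rough proxy
    for the histogram peak) and shifts all three channels so their
    backgrounds sit at a common level -- the same effect as nudging
    the levels/curves sliders per channel by hand.
    """
    backgrounds = np.median(img, axis=(0, 1))   # one background level per channel
    target = backgrounds.mean()                 # common level to shift everything to
    balanced = img - backgrounds + target       # bring the peaks together
    return np.clip(balanced, 0.0, None)         # clamp away negative pixels

# toy example: a flat grey frame with a blue light-pollution cast
frame = np.full((4, 4, 3), 0.2) + np.array([0.0, 0.01, 0.05])
out = equalize_backgrounds(frame)
```

After the shift the three channel backgrounds coincide, which is exactly what removes the uniform colour cast; real tools like Siril's background extraction do considerably more (they fit and subtract a gradient, not just a constant).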
  3. I was thinking of maybe getting one of these, or a similar size from another brand, as a second scope to keep me busy while my main scope is imaging. The weight is a bit concerning though, as I would want to mount it on some low-effort manual alt-az mount like an AZ4 or AZ5. Difficult to tell without handling one in person, but would you say it's a bit too optimistic to think I could get away with that, or is it better to go for maybe a 100mm one?
  4. On the contrary, I think it's becoming cheaper every day. DSLRs are better and cheaper than they were 10 years ago, and dedicated astronomy cameras are getting cheaper all the time too. Right now there is some healthy competition in the camera market and you can get some very good cameras for not that much money. I wasn't in the hobby 5 years ago, but looking at what some older models (like ATIK cameras) cost now, the price-to-sensor-area ratio seems to have improved a lot. Nowadays we also have many cheap OSC narrowband filters, like the Optolong L-eXtreme, that bring the entry cost of narrowband imaging down by about 2000 EUR. Of course OSC + duoband filter is not the same as a mono camera + 3nm narrowband filter set, but it's in the same category at least, and for a fraction of the price.
Basic scopes and mounts are already pretty cheap and I doubt competition can bring those prices down much further. An acquaintance of mine is into normal photography and he was amazed at how cheap some of this kit is; you can't really get even a half-decent lens for 300 EUR, but you can get a 130PDS for that, and that's already a pretty good scope. Most tools used in the capture and post-capture process are available for free, and you can do pretty much everything without dropping a single cent on software, which didn't have to be the case. Right now I guide my telescope with PHD2, run my mount and camera with NINA using the built-in USB connection that all new Sky-Watcher mounts have, and preprocess and stack with Siril. All of these tools are free and work as well as the paid options. Sure, the best processing software (PixInsight) costs a lot of money, but IMO beginners won't be needing that for a long time. You can get a pretty decent astrophotography setup (one you won't immediately regret) for as little as 1000 EUR, though it would be better to have a bit more. That kind of price is, in my opinion, not out of most people's reach.
Of course there are differences between countries in what a worker can realistically afford, but I think this applies to many of the most commonly seen nationalities here. Thinking back on my previous hobbies, I don't think astrophotography is really that expensive compared to them. I used to ride/tune/tweak/upgrade motorcycles and that cost around the same; I was fanatically into gaming PCs a while back and of course that costs an arm and a leg as well. Even non-hobby items like smartphones and other consumer electronics are in this kind of price range. So if one can imagine owning a decent newish smartphone, a gaming console or a PC, then one could just as well own an astrophotography setup. All of my kit has been bought on a warehouse worker's salary, which I think most will agree is on the lower end of the salary ladder and not what you'd expect of the typical astrophotographer. I started with an Astromaster 130 because it was 300 EUR, an amount of money I could lose without it being a disaster. It's not a good scope, but it was good enough to make me want better and got me hooked, so it did its job.
  5. I had the exact same thing happen with DSS: all of my images were stacked on one trailed image and the result looked a bit like yours. I was going mad with this and tried lowering the "stack best %" setting, which did not work. After going through all the subs visually I found that the highest-scored frame was the one bad frame. Since then I have NINA write some statistics of each sub into the filename itself, so that I don't have to visually inspect a thousand FITS files. Along with the time and date, exposure time and the usual stuff, I have NINA write the guiding RMS error over the duration of the sub, the star HFR and the number of stars in the frame. Without opening the subs themselves I can see a few obvious outliers in the list below and just delete them without thinking about it too much. This removes most of the obviously unsuitable images before stacking.
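Once the stats are in the filenames, culling can even be scripted. A minimal sketch; the filename pattern and the threshold values here are purely illustrative stand-ins for whatever tokens you configure in NINA's file-naming options, not NINA's actual syntax:

```python
import re

# Hypothetical filename layout (illustration only):
#   m51_300s_RMS0.85_HFR2.41_Stars412.fits
PATTERN = re.compile(r"RMS(?P<rms>[\d.]+)_HFR(?P<hfr>[\d.]+)_Stars(?P<stars>\d+)")

def flag_outliers(filenames, max_rms=1.2, max_hfr=3.0, min_stars=100):
    """Return the subs whose in-filename stats fall outside the limits."""
    bad = []
    for name in filenames:
        m = PATTERN.search(name)
        if not m:
            continue  # filename carries no stats, leave it alone
        if (float(m["rms"]) > max_rms
                or float(m["hfr"]) > max_hfr
                or int(m["stars"]) < min_stars):
            bad.append(name)
    return bad

subs = [
    "m51_300s_RMS0.72_HFR2.30_Stars420.fits",
    "m51_300s_RMS3.10_HFR5.80_Stars38.fits",   # trailed frame
    "m51_300s_RMS0.80_HFR2.45_Stars401.fits",
]
print(flag_outliers(subs))  # only the trailed frame is flagged
```

The thresholds would of course need tuning to your own typical guiding and seeing numbers.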
  6. This is what I got from a quick composite in Siril and some work in PS: Like @AKB above, I think your focus is a bit soft, or some other issue is making the image soft. Stars are at 5'' FWHM, which is definitely too high for this kind of tight-crop, high-resolution work. Either focus, guiding, seeing, or all three need to be looked at to get to the root of the issue. The colour is there, but I think you would need to invest a lot more integration time to ease the noise in the image and make the colour easier to pull out. It's up to you to decide what's good enough, but I would try to get some more. Also, you had some serious rotational issues in your capture, which means the already small image has to be reduced further in size: You would need to spend some time making sure you have the same camera rotation in each session if you don't want to crop the image further. You also have many hot and cold pixels left in the image, which I find a bit strange for a stacked image. Either you did not dither, your darks did not remove them, or both. Not a huge deal, but since you were wondering about calibration I thought I would point that out.
  7. The necessity of a brutal stretch is not a problem in itself, but it is a bit of a problem in this case if you use Photoshop for all the processing. I would recommend doing some things to the stack in Siril first and then moving the partly processed file to Photoshop for the final adjustments (which are often most of the work). You can do a background extraction if necessary, then photometric colour calibration, and then stretch the image to your liking in Siril. This has nothing to do with the star issue though; I just wanted to point out that with files needing a lot of stretching, the way you have to do it in PS can seem "wrong", and there are better alternatives. The star detection issue is hard to advise on without seeing a raw frame, but my guess is it has to do with sampling rate and star size: DSS doesn't count the stars in your images as stars because they are spread over too many pixels. Sampling rate is only part of this though; seeing and your guiding performance will also affect it. Try the super-pixel debayering setting in DSS and see if it detects more stars: If that doesn't work, could you attach one raw frame here so we could have a look?
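To show why super-pixel debayering helps star detection on oversampled data, here is a minimal numpy sketch of the idea, assuming an RGGB Bayer pattern (the function name is mine; DSS's internal implementation may differ in details):

```python
import numpy as np

def superpixel_debayer(raw):
    """Super-pixel debayer of an RGGB mosaic.

    Each 2x2 Bayer cell becomes ONE output pixel (R, mean of the two
    greens, B), so the result is half the width and height of the raw
    frame and every star spans fewer pixels -- which is why a star
    detector can fare better with this setting on oversampled images.
    Assumes RGGB; other patterns only shift the slice offsets.
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

mosaic = np.arange(16, dtype=float).reshape(4, 4)
rgb = superpixel_debayer(mosaic)
print(rgb.shape)  # (2, 2, 3): half the size in each axis
```

Unlike normal interpolating debayer methods, no pixel values are invented here; resolution is traded for a cleaner, smaller image.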
  8. A SpaceX Falcon 9 second stage, but launched in 2017: https://www.space.com/spacex-rocket-stage-deorbits-over-mexico All second stages that have not been sent to escape trajectories go through this, but most re-enter where they are never seen, because the Earth is mostly ocean. The second stage has many metal parts built for high temperatures that take a long time to burn up in the atmosphere, so this sort of light show can last for a while (engine parts in particular benefit from not melting in use, which also means they resist re-entry heating for a while). Some kit can even make its way to the ground after re-entry: https://www.theverge.com/2021/4/2/22364582/spacex-rocket-debris-falls-farm-washington By the way, this is the reason SpaceX launches Starlink to low initial orbits: this second stage has been nothing but space junk for 5 years, which the Starlink satellites would also become if sent straight to higher orbits.
  9. This is how my stuff is stored when I expect suitable weather, so I can leave right away. Everything is pre-packed; I don't have to check anything and can just haul the gear to my car and set off. I usually push some of this under the bed, but everything stays packed as in the picture. I live in a tiny flat, so floor space is not a luxury I have, but what little there is I have dedicated to astronomy gear (naturally).
Not sure how much all of this weighs, but probably around 50-55 kg. I carry it to my car in 2 trips. I am pretty sure I can't add a single piece of kit and still get away with 2 trips; whatever I buy next will need to be hauled separately. It's a struggle if my apartment building's elevator is broken (it sometimes is) and I have to carry everything down from my 6th-storey flat 😬. But it's surprisingly easy to carry what most would categorize as a big scope and a big mount in a "mobile" fashion. It's not grab-and-go, but there is no reason it can't be done.
Some things are pre-packaged so that I don't have to set up everything every time; for instance, the imaging train never gets broken down. The coma corrector and the adapters needed for backfocus stay attached to the camera at all times, so I can just drop it into the focuser and get going straight away. I also mark my focuser and camera with aluminium tape to return the rotational position to my previous setting (if imaging a multi-day target) so that I can get going quickly. The guide camera stays in the guidescope at its focus position, so I don't really have to refocus it much, just a touch to account for different temperatures. From arriving at my imaging location I am set up and polar aligned within 30 minutes, which is always faster than the scope takes to cool down (well, most of the time). For trips where I intend to only do visual it's a lot faster and I can drop a lot of this gear, but it still gets carried in 2 trips.
  10. The files are in the PixInsight-exclusive XISF format; would you mind uploading them in the normal 32-bit FITS format so users of other software can have a go?
  11. The Maxfield is not that picky about backfocus, it seems. I accidentally left out a 0.5mm shim I thought I needed, but imaged several sessions without it and noticed no difference afterwards 😅. The problem with the Maxfield is easy to see in the spot diagrams (looking at the RMS radius numbers; I ignore the visuals in the graphs as the scale is arbitrary): the spot size is very different in the centre compared to the edges, which means that if you focus on a star in the centre you get soft stars at the edges but decent ones in the centre. If you focus using image statistics from the entire frame, like NINA's HFR calculation, you tend to get a generally soft focus, as the ideal HFR value lands somewhere halfway between the centre and the edges. I focus with NINA's HFR values, as I find it gives the best result of the tools I have at hand (no autofocuser), and still this extra coma issue is present. With the other correctors this issue is much smaller. The GPU stays close to a 2 micron RMS spot size, so I assume that means the field would be flat and focusing on any point is valid. With the ES HR the spot size doubles from the centre to the edges, but the spot itself is still very small. The Paracorr claims to stay between 1 and 1.5 micron RMS over an APS-C sized field, so on paper it's the best. The Maxfield is the clear outlier here: the spot size quadruples between the centre and the edge of the field. If the GEO measurement is used it's even worse, but I don't know what that means anyway, so maybe not.
  12. Looks great! I can't really see any coma in the corners, which is what I expected from the GPU. On the topic of the Explore Scientific corrector, I thought I would have a look at some spot diagrams; below is what I found. They are difficult to compare since some of them do not state what scope they were tested with, but it's better than nothing. Explore Scientific HR: TS GPU: Televue Paracorr: This is the only graph I could find for the Paracorr for some reason. And the TS Maxfield 0.95x that I am replacing: Out of these the Maxfield 0.95x is obviously the worst, no contest. Between the GPU and the Explore Scientific it's more even, but looking at these I would favour the GPU. The Paracorr looks like it would be the best of the lot, but its graph looks very different, so I'm not sure how to interpret it. The RMS radius curve would point to significantly lower values than the others. I'm also not really sure what the difference between the RMS radius and GEO radius measurements is. They differ quite a lot between all of these too, so I'm not sure what to make of it.
  13. I have a VX8 F4.5 Newtonian and up until now I have been imaging with a TS Maxfield 0.95x coma corrector, which I initially liked, but as I use it more and improve other parts of my capturing, the coma corrector comes out as the weak link. I don't think it performs all that well across my APS-C sensor, and it's not really that sharp even in the middle, so it's time to look for an upgrade. The corrector will also be used for visual. For now there is no hard budget limit and I don't need to order one right away; I'm weighing the different options so that I know what I am saving for. The GPU I could buy now, but some of the more expensive ones would have to wait.
My Rising Cam IMX571 OSC has 3.76 micron pixels, so that is something I have to keep in mind for image scale. Most correctors will need binning, or some other method of halving the resolution (like super-pixel debayer, splitting, etc.; when I say bin I mean one of these methods). Top contenders I am thinking of: the TS GPU 1.0x, giving 1.72''/px at bin 2, and the TeleVue Paracorr 1.15x, giving 1.5''/px at bin 2. Some other correctors that come to mind: one of the 0.73x correctors or the new Starizona 0.75x, giving 1.18''/px unbinned, and the APM 1.5x corrector/barlow, giving 1.15''/px at bin 2.
Looking at my best subs so far, cropped to half the frame to lessen the FWHM impact of the poorly performing Maxfield 0.95x, I have gotten down to 2.5'' FWHM stars, but most are closer to 3''. There is still a chance that my mount muddied the shots further, since my EQM-35 has a seismograph-like movement in RA, so it's possible those were not the best frames my conditions allowed. Now with the AZ-EQ6 (I don't have imaging data from it yet) this problem should be gone. This leads me to think that the 0.73x correctors unbinned would probably be a bit optimistic, as would the APM 1.5x.
Also, I haven't really heard much about the APM 1.5x, and its spot diagrams are maybe not as good as I would like, so I probably won't go that way. The real competition is between the TS GPU, of which I have heard no real negatives, and the Paracorr, of which I have heard neither negatives nor positives, since it seems less common for imaging. The Paracorr is also twice the price, but includes a handy top for visual use, whereas the TS GPU requires adapters (which I already have, so no cost). If anyone has first-hand experience using the GPU, and especially the Paracorr, for imaging, I would love to hear it, negatives and positives alike. Also please do point out if my hopes for the resolution I will be getting are optimistic or pessimistic; they are just guesses.
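The image-scale figures above follow from the standard formula, scale (''/px) = 206.265 × pixel size (µm) / focal length (mm). A quick sketch to check them, assuming a 900 mm native focal length for an 8'' f/4.5 Newtonian (the constant and function names are mine):

```python
PIXEL_UM = 3.76   # Rising Cam IMX571 pixel size in microns
FOCAL_MM = 900.0  # assumed native focal length of an 8" f/4.5 Newtonian

def scale(factor, binning=1):
    """Arcsec per pixel for a given corrector factor and software bin."""
    return 206.265 * PIXEL_UM * binning / (FOCAL_MM * factor)

for name, factor, binning in [
    ("TSGPU 1.0x, bin2",     1.00, 2),
    ("Paracorr 1.15x, bin2", 1.15, 2),
    ("0.73x reducer, bin1",  0.73, 1),
    ("APM 1.5x, bin2",       1.50, 2),
]:
    print(f"{name}: {scale(factor, binning):.2f}\"/px")
# prints 1.72, 1.50, 1.18 and 1.15 "/px respectively
```

These reproduce the numbers quoted in the post, so the guesses are internally consistent as long as the 900 mm focal length assumption holds.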
  14. This patch of the sky is desperately in need of vacuuming, that's a lot of dust. Not sure where to look in the image with so much going on, great stuff!
  15. They are actually launched to a dangerously low orbit on purpose, and that's a very good thing. At a perigee of 210 km (where they are released) they would not spend long in orbit if there were a malfunction. This low orbit is a no-control failsafe that removes faulty satellites quickly, without adding space junk or creating hazards for functional satellites. After launch they check that everything works and only then start the slow orbit-raising burns with the onboard Hall-effect ion thrusters. The orbit raising itself takes weeks because, while these thrusters are very efficient, they only provide fractions of a newton of thrust. So in the case of very unusual atmospheric conditions a satellite can lose more velocity to drag than the little thruster can make up. I am actually quite surprised SpaceX is willing to take these precautions. The Falcon 9 would have no trouble sending the stack of satellites to any low-Earth-orbit altitude, so they can only lose money doing this. The public likes it though, so the PR gains are probably worth it, or they would not take the risk.
  16. How long did you wait between bringing the scope inside and taking flats? Dew can take an hour or more to go away. I would guess it's the outside of the sensor window that's dewing up rather than the inside, unless your camera is old and the desiccant is really saturated.
  17. For galaxies and other stellar targets, if the goal is a real-colour result: I would never use one, as it's counterproductive. Well, maybe if I were imaging from a location where all the street lights were still the old yellow ones instead of LEDs. But even then a CLS filter would cut off a huge chunk of the galaxy's signal. For nebulae, maybe, but probably not in Bortle 5 or darker. Then again, I don't really image nebulae, so no, I would not use one at any level of light pollution.
  18. Not sure what's going on in your flats, but it looks like it could be dew forming on the sensor? It could be something else entirely, but here's what I see in your flats using a histogram stretch and false-colour rainbow rendering to make details easily visible: Red: Green: Blue: The flats have a very uneven vignetting profile. Why is that? I have not imaged with refractors, so if this is common with them I'm sure someone will point it out (or if you even used a refractor; there are no diffraction spikes, so that is my guess). But if I got flats like these with my Newtonian, something would be horribly wrong. Green looks OK to me, but red is square and blue is very off-centre; I don't know what to make of that.
The reason I think dew might be to blame is that in the red channel the centre of your image is actually not the brightest part. It's difficult to see, but there are loads of cold pixels with lower values than slightly outside this area. I think the shape is also somewhat similar to the bright spot in your stacked image. Since the stacked image shows the issue as brighter than it should be, the flats must have a darker-than-they-should-be spot in the same place, which looks to be the case. If you had dew during the lights as well, the centre spot would have been calibrated out and both lights and flats would show this shape. I am guessing something dewed over while, or just before, you took the flats, but not for the duration of your lights.
If it's too difficult to see, I took some measurements with a 150x150 pixel area selection, reading the red values as median ADU: centre, which should be brightest: 18650; slightly off centre: 18700. I'm not sure why the other two channels look so different and don't seem to have this kind of issue. Not sure dew could somehow shadow only the red channel? Probably not, now that I think of it, but it's a guess.
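The 150x150 median-ADU measurement described above is easy to reproduce outside any particular program. A minimal sketch with numpy, using a synthetic red channel (the values mimic the 18650-vs-18700 reading; the function name and coordinates are mine):

```python
import numpy as np

def region_median(channel, cx, cy, size=150):
    """Median ADU in a size x size box centred on pixel (cx, cy)."""
    half = size // 2
    return float(np.median(channel[cy - half:cy + half, cx - half:cx + half]))

# synthetic red channel: a flat field with a slightly darker "dew" patch
# in the middle, mimicking the measurement in the post
red = np.full((1000, 1000), 18700.0)
red[425:575, 425:575] = 18650.0

print(region_median(red, 500, 500))   # centre patch:  18650.0
print(region_median(red, 200, 500))   # off-centre:    18700.0
```

A darker-than-surroundings centre in a flat is exactly the signature that would over-correct the lights and leave a bright spot in the calibrated stack.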
  19. I have played around with this as well. I have a script in Siril that calibrates my frames and outputs split mono frames at the end; the only thing I need to do is put the frames into their folders in the working directory (I use symbolic links, so no copying). Siril does this really fast and it's only a minor nuisance now that I'm used to it.
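The symbolic-link trick can be scripted so populating the working directory is one command. A minimal sketch, not tied to any particular Siril script; the function name and folder layout are my own illustration:

```python
import os
import tempfile

def link_frames(frame_paths, workdir):
    """Symlink frames into a working directory instead of copying them.

    Saves disk space and time: the stacking software reads the links
    exactly as if the files were there, and nothing is duplicated.
    (On Windows, creating symlinks may require elevated privileges.)
    """
    os.makedirs(workdir, exist_ok=True)
    for path in frame_paths:
        link = os.path.join(workdir, os.path.basename(path))
        if not os.path.exists(link):
            os.symlink(os.path.abspath(path), link)

# demo with throwaway files in temporary directories
src, work = tempfile.mkdtemp(), tempfile.mkdtemp()
frames = []
for i in range(3):
    p = os.path.join(src, f"light_{i:04d}.fit")
    open(p, "w").close()
    frames.append(p)
link_frames(frames, os.path.join(work, "lights"))
print(sorted(os.listdir(os.path.join(work, "lights"))))
```

The same idea works from the shell with `ln -s`; the point is just that no frame data ever gets copied.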
  20. You can split a sub into its 4 base components, 1 red, 2 green and 1 blue monochrome sub, without debayering. Then you see that each of the mono subs has only 1/4 of the pixels, i.e. is sampled at half the rate of the advertised resolution. This is the real resolution captured by the camera, and no interpolation takes place (because nothing is debayered). Not sure I would recommend doing this, but if you're bored you can play with the idea and process OSC as you would mono. The downside, of course, is extra faff during preprocessing, and you end up with 4x the number of subs to stack.
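The split itself is just strided slicing on the raw mosaic. A minimal sketch, assuming an RGGB pattern (other patterns only change which slice carries which colour):

```python
import numpy as np

def split_cfa(raw):
    """Split an undebayered RGGB frame into its four mono subframes.

    No interpolation anywhere: each output is the raw values at one
    Bayer position, half the advertised resolution in each axis.
    """
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

raw = np.arange(36).reshape(6, 6)   # toy 6x6 mosaic
subs = split_cfa(raw)
print({k: v.shape for k, v in subs.items()})  # each sub is 3x3
```

Each of the four arrays can then be calibrated and stacked exactly as a mono channel would be, which is the "process OSC as you would mono" idea above.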
  21. Sounds like it's working well. My particular camera advertises this: -35°C below ambient under short exposures / -40°C below ambient under long exposure times (>1s). So it looks like this extra waste heat with short exposures is a known issue and not some special problem. It could be my sensor (IMX571) or just this model of camera that's the culprit.
  22. Unless the cooler is not working as intended, there is no point in doing this. With very short exposures you can have issues with rapid heating on some sensors. My camera can heat up by as much as 0.5°C when taking very short exposures, like 0.1s flats or biases. I'm not sure if it's a common issue across different models, but mine has it. The harder the cooler works, the more this effect matters on my camera, so if I'm taking calibration frames at room temperature with the cooler running 30°C below ambient, I may have to leave some delay between shots, or just set the temperature a bit lower than actually needed. Outside, when the cooler is hardly working at all, there are no issues with this. But generally no, there is no point in setting delays.
  23. Sorry, I don't know if there is a way. I'm not super technically oriented, so I'm not sure how to go about figuring out whether this could be done somehow. Maybe some electronics wizard could get the RisingCam to appear as a ZWO product to the ASIAIR, but I won't be risking my unit becoming one ⚡.
  24. Well, I won't be putting my RisingCam up for sale any time soon, let's put it that way 😁.
  25. This camera: https://www.aliexpress.com/item/4001359313736.html?spm=a2g0o.productlist.0.0.6f047164JGhOx6&algo_pvid=88c7fc7f-59b2-4b58-9bdc-a75b08237944&algo_exp_id=88c7fc7f-59b2-4b58-9bdc-a75b08237944-0 would be better in terms of specs in almost every category, and with binning or super-pixel debayering you get a similar resolution to the QHY8L. Price is difficult to pin down; different shops have wildly different prices. Astroshop: https://www.astroshop.eu/astronomical-cameras/qhy-camera-8l-color/p,58166 Rother Valley Optics: https://www.rothervalleyoptics.co.uk/qhy8l-one-shot-colour-ccd-camera.html The Rising Cam would be cheaper than the one Astroshop sells but more expensive than the RVO one; not sure how there can be this much price difference between the two. Anyway, spec-wise the Rising Cam IMX571 will beat the QHY8L, so I'm not sure I would go for the QHY8L unless you get it used for very cheap. If you don't want to go for a lesser-known brand, you could always get something like the ZWO 294MC, which will also beat the QHY8L in specs. Any particular reason you wanted the QHY8L specifically?