
Everything posted by ONIKKINEN

  1. So how about a test then? Take some darks with a bunch of bananas next to the camera and see if there are more hits? The forecast is cloud and more cloud for the foreseeable future, so I might just try that over the holidays, and I already have the control image to compare to. Mushrooms are another option; they are still radioactive from Chernobyl fallout here in Finland, and occasionally some are tested to be slightly above the EU-suggested limit for Cesium-137.
  2. Here is one of mine: I think these are from the sensor window, as they are quite small. The next glass element in my scope is in the coma corrector, so probably too far away to make noticeable shadows, unlike with yours, since you have that filter in between.
  3. I think your flat looks normal, ghost views of the central obstruction included. All my flats have those and always have, so I don't think there is anything to worry about there. I'll post one of my own flats in a bit, which also has those dark spots resembling the secondary.
  4. They look like newtonian dust motes with how the central obstruction also produces its own shadow, but I do agree they are unusual. Using this tool might give an answer on where in the imaging train this came from: https://astronomy.tools/calculators/dust_reflection_calculator (a rough sketch of the geometry follows after this post). Lack of flocking might make things worse, although I never saw it quite this bad with my newtonian before flocking. Lights from a nearby house could also make things worse. Lots of "could" options here, so testing is required to single out the issue, or the multitude of issues that work together in exaggerating the motes/reflections.
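A rough sketch of the geometry such a calculator is presumably based on: a mote at distance d in front of the sensor, lit by a converging cone of focal ratio N, casts an out-of-focus shadow roughly d/N across, so d is approximately the shadow diameter times the focal ratio. The numbers in the example are made up for illustration.

```python
# Rough estimate of how far in front of the sensor a dust mote sits,
# from the diameter of the "donut" shadow it casts on the image.
def dust_distance_mm(shadow_diameter_px: float,
                     pixel_size_um: float,
                     focal_ratio: float) -> float:
    shadow_mm = shadow_diameter_px * pixel_size_um / 1000.0
    # The shadow grows as distance / focal ratio, so invert that relation.
    return shadow_mm * focal_ratio

# Example: a 150 px donut on 3.76 um pixels at f/4.4 puts the mote
# roughly 2.5 mm from the sensor, i.e. around the sensor window.
print(dust_distance_mm(150, 3.76, 4.4))
```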
  5. Yes, just simple division of image one / image two (a code sketch of the crop test follows after this post). Here are some examples of what the results look like, with the false-colour rendering mode and histogram view mode enabled. First, a known good flat divided by another known good flat from the same image run: Then I cropped 8 pixels off the left edge of flat 1 and 8 pixels off the right edge of flat 2, effectively shifting the center of the optics by 8 pixels, or 30.08 microns. Now there is a little something, maybe. Edges of some dust motes may be visible, and calibration could fail with this one. Only 30 microns of movement! Then, as the last one, I cropped 16 pixels from each, again from opposite edges, and now the flat is completely ruined, with dust spots plainly visible:
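A minimal sketch of that crop test outside PixInsight, assuming the two flats are saved as FITS and using numpy/astropy (file names are placeholders):

```python
import numpy as np
from astropy.io import fits

flat1 = fits.getdata("flat1.fits").astype(np.float64)
flat2 = fits.getdata("flat2.fits").astype(np.float64)

# Crop 8 px off the left edge of flat 1 and 8 px off the right edge
# of flat 2, shifting the optical pattern 8 px between the frames
# (8 px * 3.76 um = 30.08 um on an IMX571) before dividing.
shift = 8
ratio = flat1[:, shift:] / flat2[:, :-shift]

# Normalize around 1.0: a stable system gives a featureless ratio,
# movement shows up as dust-mote edges and gradients.
ratio /= np.median(ratio)
fits.writeto("ratio.fits", ratio.astype(np.float32), overwrite=True)
```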
  6. I took 27x 3600s darks a year ago to see how many "hits" the sensor got and stacked them with maximum (a code sketch of the max stack follows after this post); stretched result below: There are a number of streaks in the image; whether cosmic or local in origin is not something I know the answer to, but it could be that some of them are cosmic rays while most are likely local radiation hits. Taken indoors in an apartment building, so thick concrete walls all around.
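A sketch of that maximum-value stack, assuming the darks are FITS files in a folder (paths are placeholders):

```python
import glob
import numpy as np
from astropy.io import fits

paths = sorted(glob.glob("darks/*.fits"))
stack = fits.getdata(paths[0]).astype(np.float32)
for path in paths[1:]:
    # Keep the brightest value each pixel reached in any frame, so
    # every hit and streak from every sub survives into the result.
    stack = np.maximum(stack, fits.getdata(path).astype(np.float32))

fits.writeto("max_stack.fits", stack, overwrite=True)
```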
  7. I can't see any issues in the data itself; the darks and dark flats look good and match the lights in offset, so not a data issue I would say. The only thing an analysis of the FITS headers reveals is that you had very slightly different focus positions between the flats and lights, 54 steps to be exact (a sketch of this kind of header check follows after this post). Not sure how much movement that is with your focus motor and focuser, but it doesn't sound like a lot and is probably not an issue at all. After all, focus moves in and out as the night progresses and your tube contracts as it cools, which is almost certainly a larger movement than whatever the length of 54 steps is (and flats should still work regardless of small thermal expansion during the night - at least I have never had issues with that myself). Give the pixel math flat test a try; at least you'll get a definitive answer on whether it's scope-stability related or not.
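For anyone wanting to run the same header check, a small sketch with astropy; the focuser position keyword name varies by capture software, so FOCUSPOS here is an assumption to adjust for your files:

```python
import glob
from astropy.io import fits

for pattern in ("lights/*.fits", "flats/*.fits"):
    # FOCUSPOS is an assumed keyword name; check your own headers.
    positions = [fits.getheader(p).get("FOCUSPOS") for p in glob.glob(pattern)]
    positions = [p for p in positions if p is not None]
    if positions:
        # The spread between lights and flats is the movement in question.
        print(pattern, "focus position range:", min(positions), "-", max(positions))
```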
  8. I'll take a look at the data later today and see if there is something that comes to mind. I think you are right to assume that something is moving, but figuring out what is not easy. One test you can run to confirm this hunch is to take flats at different tube orientations and then divide one flat by another using pixel math (a minimal sketch follows after this post). For example, take one flat with the tube pointed at the zenith, then one with the tube horizontal. Divide the zenith flat by the horizontal flat and see what you get. If everything is rock solid, you get an even image with no gradients or blotches of any type as the pixel math result. If things are moving, you get some kind of pattern as a result and the suspicion is confirmed.
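A minimal sketch of that orientation test, assuming the two flats are saved as FITS (file names are placeholders):

```python
import numpy as np
from astropy.io import fits

zenith = fits.getdata("flat_zenith.fits").astype(np.float64)
horizontal = fits.getdata("flat_horizontal.fits").astype(np.float64)

# Divide and normalize around 1.0. Values hugging 1.0 with no
# structure mean nothing moved between orientations; dust-mote rings
# or gradients mean something in the imaging train shifted under gravity.
ratio = zenith / horizontal
ratio /= np.median(ratio)
fits.writeto("orientation_ratio.fits", ratio.astype(np.float32), overwrite=True)
```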
  9. Found a thread from a couple of months ago where this Ha to RGB thing was discussed:
  10. The pixel math route of adding Ha to RGB is really not very good, or easy to do for that matter; there are more ways to make a mess than to get a result you like, I would say. But if you have to, it is done with the max(x,y) operator, which takes the maximum value of the two images for each pixel of the combined result (a sketch follows after this post). Noise levels need to be similar and gradients need to be eliminated to have a chance at success, or you just add the noise and gradient from one image to the other (and the stacks need to be linear fit to each other). If PixInsight is not available, then Photoshop or GIMP is what I would recommend for this.
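A sketch of that max-combine outside PixInsight, assuming the red channel and Ha stacks are already registered, gradient-free and linear-fit to each other (file names are placeholders):

```python
import numpy as np
from astropy.io import fits

red = fits.getdata("rgb_red.fits").astype(np.float32)
ha = fits.getdata("ha.fits").astype(np.float32)

# max(x, y): each output pixel takes whichever input is brighter, so
# Ha-bright regions replace rather than add to the red channel. If the
# noise floors differ, the noisier image wins everywhere and makes a mess.
red_ha = np.maximum(red, ha)
fits.writeto("red_plus_ha.fits", red_ha, overwrite=True)
```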
  11. Thanks for the replies. I think I will try to get some more data, and I think the image can be improved significantly if I just get one decent night in, as none of the 10 hours in this stack is what I would call good data. With good darkness and no transparency issues from passing clouds, I reckon a 5-hour night would beat this dataset easily, so there is room to improve without spending too much time (though of course no such night is guaranteed).
  12. Thank you! It's a very busy galaxy for sure, or maybe it just seems that way because it's relatively nearby? (I'm no expert either.) I think we are lucky to have a face-on spiral galaxy like this one so close to us.
  13. Somewhere around 10 hours with an 8'' f/4.4 newtonian + TeleVue Paracorr and a RisingCam IMX571 OSC camera: Imaged over 3 nights, 2 of which were in October under high wind and generally not-so-great skies, and the last night with rogue clouds on every other sub, so not the best dataset to begin with, but I'll take what the skies give. This is a kind of test image to gauge whether I need more data, and how much if I do, for when I one day blend this into a broadband image to make a nice RGB+NB combo. Broadband data is sitting at around 13 hours, but I would like to get more of that too. The goal is to finish the target as soon as possible, but the M33 imaging season is quickly closing, so I might need to keep going next year. Not the end of the world, since the first nights of the project have already celebrated their one-year anniversary, so what's another year more. 🤪 Enough red bits showing up already, or more time in the oven? Thoughts and comments very much welcome. -Oskari
  14. I used to run my AZ-EQ6 (and EQM35 before it) with a USB cable plugged into the bottom of the handset, but I switched back to an EQMOD cable because there were occasionally some communication issues with the mount. The mount would report to NINA that it was 3 degrees off according to plate solving, which it was. But the next slew went 3 degrees further off target, and the next one 3 degrees more, again and again. This didn't happen every time, but switching back to EQMOD made it disappear completely. This seems to be a rare issue with no mention online, so it could be anything, but everything points to some sort of USB gremlin. So I would not recommend the USB route, at least not through the handset.
  15. I could see an automated preprocessing suite happening in the near future: something that calibrates, stacks, extracts gradients, deconvolves, colour calibrates and stretches an image automatically. All of those have an automatic tool available today, just not in the same package, so it is not a stretch at all to believe something like this is around the corner. I think only positives will come from a tool like that. The hobbyist who just wants to see the final image with as few steps as possible will be pleased with the ease of use, and the enthusiast who wants to tweak every little thing can simply not use it - or better yet, use the automation and spend more time fine-tuning other things in the image.
  16. Ran BXT in correct-only mode with AI2 and the new AI4 on the most cursed stack I could find at the moment, side by side below. I don't think I need to point out which image was with which AI version... And the original uncorrected version: In short, amazing. The new AI has also managed to merge many of the unfocused diffraction spikes (but not all, it seems), which was one of the issues I had with the old AI.
  17. Processing is probably 90% of the end result, if not more. I think a person skilled in processing can come up with a better image through a kit lens and a DSLR than a complete beginner with 10k € invested in their kit. Having good data helps a lot, but there is still much more to learn in processing than there is in image capture, and I think the image capture part is something most beginners will get the hang of in a couple of years, to the point where only equipment upgrades bring a noticeable improvement.
  18. Hi, and a warm welcome to SGL! For conventional photography, especially for contests and such, I do agree that generative AI is a potential issue, and one the photographer must mention was used, or simply not use at all if it is against the rules. However, BXT is an AI tool for a very specific purpose; it does no additive or generative work on the image, and every decision it makes is based on measurements from the input image, so no new information is created out of thin air like with other AI image processing tools such as TopazAI. If you would like to know more about how BlurXterminator works and how it is able to make the decisions it does, check out this interview: https://www.youtube.com/watch?v=6hkVBnYYlss&t=1s An interview about the new AI4 version of BXT can be seen here: https://www.youtube.com/watch?v=nLyZGzT8T5c Not sure how a distinction between a natural and a processed image could be made, since every astrophotograph must be processed before it can be presented (linear data looks basically all black). How well someone manages to process their image is for the viewer to determine (and voice opinions on), in my opinion.
  19. Looks pretty damn good to my eyes. No sign of any of the issues that a lack of flats would often bring, so I guess GraXpert works really well for that role.
  20. This sort of circular outline of the fully illuminated circle showing up in an image indicates that the flats do not mechanically match the lights - as in, something between the sensor and the mirror moved during the night. Many issues could cause this, like focuser slop, secondary slop, primary slop, or the tube itself deforming ever so slightly under gravity in different orientations. The filter thing you got right; it needs to be close to the sensor. But if this circle shows itself again, you have more work to do in figuring out which part of the system is not stable.
  21. Looks amazing already; this will be spectacular with RGB! Admirable dedication for one project too; patience really is a virtue in this hobby with the weather being what it is.
  22. I briefly took part in a collaborative effort last spring and found out that it's really not my thing, even though my contribution was not much more than sharing data I had already captured. I had no real motivation to process the shared stacks in the end because it really wasn't my own data, even if that stack is the best data I will most likely ever get to touch. Maybe someone can relate, maybe not. In any case, the process for that project was that everyone shared calibrated data (16-bit for RGB, 32-bit for Ha, for space-saving reasons) in a shared Google Drive folder, from which the person responsible for stacking got all the raw data. If I recall correctly, the stacking and processing part took several days' worth of work because there was just so much to go through (I think a few hundred GB, of which 70 GB was mine because of short subs). Sharing data this way is not too difficult from a convenience point of view, since everyone gets 15 GB of free Google Drive space to use and you can just delete the data from your drive once the stacker lets you know they grabbed it. Obviously this method requires that everyone participating has good upload speeds and an uncapped data plan, but it gets the best image out of the input data, unlike just stacking masters, which is less efficient than stacking every sub into one master.
  23. Also remember to tick the autostretch button next to the histogram; it's the lightning-bolt-looking symbol. That way you can see something other than black, and you may be able to find the Moon by following the glare it produces when it is nearby. You also need to see stars when focusing, and with the autostretch option ticked you should see out-of-focus stars as large disks and then be able to find focus.
  24. Teleskop Express is very good, as is Astroshop. Both have oddly high pricing for certain items sometimes, so it is best to look at many sites before purchasing. On pricing, FLO is actually still very competitive with EU suppliers, even when I have to pay 24% Finnish VAT + 4.2% duty when importing (the 4.2% is EU-wide, I think; a worked example follows after this post). Not worth it for every item, but one shouldn't overlook FLO as an EU customer. My last purchase from FLO (I think it was) was the Paracorr for my newt, which was 50-150€ cheaper than EU suppliers even with all the "extra" fees.
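A worked example of the import maths, with the rates as quoted above and a made-up price and exchange rate (check current rates before relying on this):

```python
price_gbp = 400.0      # hypothetical FLO price
eur_per_gbp = 1.15     # hypothetical exchange rate
duty = 0.042           # EU import duty as quoted above
vat = 0.24             # Finnish VAT as quoted above

# VAT is charged on the duty-inclusive price, hence the chained factors.
landed_eur = price_gbp * eur_per_gbp * (1 + duty) * (1 + vat)
print(f"{landed_eur:.2f} EUR")  # ~594 EUR for this example
```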
  25. Without guiding, not really. But my reasoning here is that a beginner will have their hands full learning the ropes for a good while, and the first few images will probably suck anyway (I know mine did, and still do sometimes), so the lack of guiding is at first just one of many things to improve on. For a camera we are at a budget where compromises have to be made. Second-hand DSLRs are quite cheap and will get an image of a DSO, even if dedicated astro cams are much better. If they got a 120MM for guiding, they could also use it for lunar, so little money wasted in the end. Vlaiv made a good estimate on a starter set above, which is a bit over budget, but substitute a DSLR for the 533 and it's not so bad. The Quattro 6 might be a decent option just because it comes with a coma corrector, although it's probably not the best lunar scope.