'How We Sharpened the James Webb Telescope's Vision From a Million Kilometers Away' (theconversation.com)
- Reference: 0179820106
- News link: https://science.slashdot.org/story/25/10/18/0537217/how-we-sharpened-the-james-webb-telescopes-vision-from-a-million-kilometers-away
- Source link: https://theconversation.com/how-we-sharpened-the-james-webb-telescopes-vision-from-a-million-kilometres-away-262510
"We can finally present its first successful observations of stars, planets, moons and even black hole jets."
> [AMI] was put on Webb to diagnose and measure any blur in its images. Even nanometres of distortion in Webb's 18 hexagonal primary mirrors and many internal surfaces will blur the images enough to hinder the study of planets or black holes, where sensitivity and resolution are key. AMI filters the light with a carefully structured pattern of holes in a simple metal plate, to make it much easier to tell if there are any optical misalignments. We wanted to use this mode to observe the birth places of planets, as well as material being sucked into black holes. But before any of this, AMI showed Webb wasn't working entirely as hoped.
>
> At very fine resolution — at the level of individual pixels — all the images were slightly blurry due to an electronic effect: brighter pixels leaking into their darker neighbours. This is not a mistake or flaw, but a fundamental feature of infrared cameras that turned out to be unexpectedly serious for Webb. This was a dealbreaker for seeing distant planets [3] many thousands of times fainter than their stars a few pixels away: [4] my colleagues quickly showed that its limits were more than ten times worse than hoped. So, we set out to correct it...
>
> We built [5] a computer model to simulate AMI's optical physics, with flexibility about the shapes of the mirrors and apertures and about the colours of the stars. We connected this to a machine learning model to represent the electronics with an "effective detector model" — where we only care about how well it can reproduce the data, not about why. After training and validation on some test stars, this setup allowed us to calculate and undo the blur in other data, restoring AMI to full function. It doesn't change what Webb does in space, but rather corrects the data during processing. It worked beautifully — [6] the star HD 206893 hosts a faint planet and the reddest-known brown dwarf (an object between a star and a planet). They were known but out of reach with Webb before applying this correction. Now, both little dots popped out clearly in our new maps of the system... With the new correction, we brought Jupiter's moon Io into focus, clearly tracking its volcanoes as it rotates over an hour-long timelapse.
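The excerpt describes the correction only at a high level; the released implementation is the amigo code linked at [5]. Purely as an illustration of the general workflow (not the team's actual method, with every number and name invented for the example), the sketch below treats the pixel-to-pixel leakage as a small convolution kernel, fits that kernel by gradient descent on calibration frames where the true scene is known, and then iteratively removes it from a science frame:

    import numpy as np

    rng = np.random.default_rng(0)

    def blur3x3(img, kernel):
        # Linear "leakage" model: each pixel spreads into its neighbours
        # according to a 3x3 kernel (edges zero-padded).
        pad = np.pad(img, 1)
        out = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out

    # Invented "true" detector behaviour: 2% of each pixel's charge leaks
    # into each of its four direct neighbours.
    true_kernel = np.array([[0.00, 0.02, 0.00],
                            [0.02, 0.92, 0.02],
                            [0.00, 0.02, 0.00]])

    def make_scene():
        # A few isolated point sources with known positions and brightnesses,
        # standing in for calibration stars.
        scene = np.zeros((32, 32))
        for _ in range(5):
            y, x = rng.integers(4, 28, size=2)
            scene[y, x] = rng.uniform(0.1, 1.0)
        return scene

    # "Calibration" data: frames where the unblurred truth is known.
    calib_truth = [make_scene() for _ in range(50)]
    calib_data = [blur3x3(s, true_kernel) + rng.normal(0.0, 1e-3, s.shape)
                  for s in calib_truth]

    # Fit an "effective detector" kernel by gradient descent on the calibration
    # set: we only ask that it reproduce the data, not explain the electronics.
    k = np.zeros((3, 3))
    k[1, 1] = 1.0                      # start from "no leakage"
    lr = 0.01
    for _ in range(300):
        grad = np.zeros((3, 3))
        for truth, data in zip(calib_truth, calib_data):
            resid = blur3x3(truth, k) - data
            pad = np.pad(truth, 1)
            for dy in range(3):
                for dx in range(3):
                    grad[dy, dx] += np.sum(resid * pad[dy:dy + 32, dx:dx + 32])
        k -= lr * grad

    # Correct a new "science" frame by iteratively removing the fitted leakage
    # (a simple Van Cittert deconvolution): fix the data, not the telescope.
    truth = make_scene()
    science = blur3x3(truth, true_kernel)
    corrected = science.copy()
    for _ in range(30):
        corrected += science - blur3x3(corrected, k)

    print("max error before correction:", float(np.abs(science - truth).max()))
    print("max error after correction: ", float(np.abs(corrected - truth).max()))

In the real pipeline the detector model is far richer and the optics are simulated properly; the point here is only the shape of the workflow: fit an effective model on known stars, then undo the effect in software rather than changing anything on the telescope.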
"This correction has opened the door to using AMI to prospect for unknown planets at previously impossible resolutions and sensitivities..." the article points out.
"Our results on painstakingly testing and enhancing AMI are now released on the open-access archive arXiv in [7]a pair of [8]papers ."
Thanks to long-time Slashdot reader [9] schwit1 for sharing the article.
[1] https://jwst-docs.stsci.edu/jwst-near-infrared-imager-and-slitless-spectrograph
[2] https://theconversation.com/how-we-sharpened-the-james-webb-telescopes-vision-from-a-million-kilometres-away-262510
[3] https://arxiv.org/abs/2308.01354
[4] https://arxiv.org/abs/2310.11499
[5] https://github.com/louisdesdoigts/amigo
[6] https://en.wikipedia.org/wiki/HD_206893
[7] https://doi.org/10.48550/arXiv.2510.09806
[8] https://arxiv.org/abs/2510.10924
[9] https://www.slashdot.org/~schwit1
How is this anything new? (Score:2)
There is a long history of techniques for getting more detailed photos. The "did X and added AI" pattern needs to be compared, in the research itself, against the best or even middle-tier non-AI methods.
1. Take lots of photos of the same shot without moving the camera
2. Repeat step 1 for a lot of overlapping images
3. Average photos from step 1
4. Stitch together the overlapping photos from steps 2 and 3
[1]https://www.nasa.gov/solar-sys... [nasa.gov]
NASA’s Curiosity Mars Rover Snaps Its Highest-Resolution Panorama Yet - Jet Propulsion Laboratory
[1] https://www.nasa.gov/solar-system/nasas-curiosity-mars-rover-snaps-its-highest-resolution-panorama-yet/
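For what it's worth, steps 1-3 above amount to noise averaging: stacking N aligned exposures cuts uncorrelated noise by roughly a factor of sqrt(N). A minimal sketch of that part on synthetic data (registration and the stitching in step 4 are omitted):

    import numpy as np

    rng = np.random.default_rng(1)

    # Step 1: many exposures of the same scene with the camera held still.
    scene = rng.uniform(0.0, 1.0, size=(64, 64))
    frames = [scene + rng.normal(0.0, 0.2, scene.shape) for _ in range(100)]

    # Step 3: averaging reduces uncorrelated noise by about sqrt(N).
    stacked = np.mean(frames, axis=0)

    def rms_error(img):
        return float(np.sqrt(np.mean((img - scene) ** 2)))

    print("single frame RMS error:", rms_error(frames[0]))   # about 0.2
    print("100-frame stack RMS error:", rms_error(stacked))  # about 0.02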
Blinders (Score:2)
Effectively what they did was put blinders around each pixel to prevent light bleeding into the next one.
Granted, it was a finely tuned blinder, but that's what they did.
How do they know... (Score:2)
... whether the AI is actually showing what's there or simply hallucinating data based on something similar it learnt?
Re: (Score:3)
> ... whether the AI is actually showing what's there or simply hallucinating data based on something similar it learnt?
It’s a “simple” filter that just undoes the bleeding of bright pixels into adjacent rows/columns. By using a known pattern of dots, it learned how to correct the image, not what an image is “supposed” to look like; indeed, diagnosing the optics in other ways is why the aperture mask was there in the first place. It may not have increased the real sensitivity to actual objects by itself, but they verified it by making observations that were not possible with the original processing.
Re: (Score:3)
(Though I don’t understand much,) I would imagine the deconvolution would normally be done by inverting the point spread function at each pixel. Most likely that needs an iterative solver, which can be cumbersome to compute on large images. So instead they let the machine-learning model fit its weights on known cases. In the end it's still matrix algebra, just a linearized method that is less expensive computationally.
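To put the parent's picture in concrete terms: if the blur is linear, the observation is y = A x, where A is built from the point spread function, and deconvolution means solving that system, normally with an iterative solver rather than by forming the inverse of A. A tiny 1-D illustration with invented numbers (not the Webb pipeline), using plain Landweber iterations x <- x + step * A^T (y - A x):

    import numpy as np

    rng = np.random.default_rng(2)

    # Build the blur matrix A for a 1-D signal from a small point spread function.
    n = 64
    psf = np.array([0.05, 0.20, 0.50, 0.20, 0.05])
    A = np.zeros((n, n))
    for i in range(n):
        for j, w in enumerate(psf):
            col = i + j - len(psf) // 2
            if 0 <= col < n:
                A[i, col] = w

    # True signal: a few point sources; observation: blurred plus a little noise.
    x_true = np.zeros(n)
    x_true[[10, 30, 33]] = [1.0, 0.7, 0.4]
    y = A @ x_true + rng.normal(0.0, 1e-3, n)

    # Forming A's inverse is hopeless for megapixel images, so use an iterative
    # solver instead: here the simplest one, Landweber iteration, which is just
    # gradient descent on ||y - A x||^2.
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(500):
        x += step * (A.T @ (y - A @ x))

    print("residual ||y - A x||:", float(np.linalg.norm(y - A @ x)))
    print("recovered source heights:", np.round(x[[10, 30, 33]], 2))

The machine-learning route fits the weights of a comparable operator on cases where the answer is known, so applying it later is a single cheap pass instead of an iterative solve per image.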