Friday, July 7, 2023

Neural Nets and Mysterious Solutions

From a long NY Times story about AI in mathematics, I extract this:

Early during Dr. Williamson’s DeepMind collaboration, the team found a simple neural net that predicted “a quantity in mathematics that I cared deeply about,” he said in an interview, and it did so “ridiculously accurately.” Dr. Williamson tried hard to understand why — that would be the makings of a theorem — but could not. Neither could anybody at DeepMind. Like the ancient geometer Euclid, the neural net had somehow intuitively discerned a mathematical truth, but the logical “why” of it was far from obvious.

I'm sure you have all heard that neural nets can predict certain medical outcomes, such as who will die of a heart attack, much more accurately than doctors can, but, again, nobody knows how.

I have a strong sense that many things that happen in the world do so because of multiple variables interacting in subtle ways, at a scale that human minds may simply not be able to grasp. So one possible future for us and AI would be that AI could make predictions or find solutions concerning fundamental problems, but in a way that leaves us as baffled as ever. For some things – weather prediction, disease treatment – that would of course be very useful, but for things that we would really like to grasp in a deep or intuitive way it might only add to our frustration.

Suppose, say, that some AI system can predict very accurately which systems will be conscious, but can't tell us why or identify the important variables? Suppose it generates a list of extra-solar planets likely to contain life? I can imagine a future in which scientists, unable to understand what the AI is doing, run experiments on it, feeding it all different sorts of data in an attempt to understand, humanly, what matters and how the results are being reached. It sounds like a sci-fi story, but it may end up being real.
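Incidentally, the kind of experiment imagined above already exists in rudimentary form in machine learning, under names like "permutation importance": scramble one input variable at a time and watch how much the black box's answers change. Here is a minimal sketch of the idea, with a made-up hidden rule standing in for the inscrutable model (the function names and the rule are illustrative, not any real system):

```python
# A sketch of probing a black-box predictor by scrambling one input
# variable at a time (permutation importance). The "model" here is a
# hidden rule the probers cannot see, standing in for a neural net.
import random

random.seed(0)

def black_box(x):
    # Hidden rule: only the first two of four variables matter.
    return 1 if x[0] + x[1] > 1.0 else 0

# Generate random inputs and record the model's baseline answers.
data = [[random.random() for _ in range(4)] for _ in range(500)]
baseline = [black_box(x) for x in data]

def agreement_after_shuffling(var):
    # Shuffle one variable's values across the dataset, re-query the
    # model, and measure how often its answer stays the same. A large
    # drop in agreement suggests that variable matters.
    column = [x[var] for x in data]
    random.shuffle(column)
    probed = []
    for x, v in zip(data, column):
        y = list(x)
        y[var] = v
        probed.append(black_box(y))
    return sum(a == b for a, b in zip(baseline, probed)) / len(data)

for var in range(4):
    print(f"variable {var}: agreement {agreement_after_shuffling(var):.2f}")
```

Running this, the two variables the hidden rule uses show reduced agreement while the irrelevant ones stay at 1.00, which is exactly the sort of "what matters" map the imagined scientists would be after, even though it says nothing about the "why."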

4 comments:

DannyBlue said...

It does: Ted Chiang's very short 'The Evolution of Human Science'

https://www.nature.com/articles/35014679

John said...

Thanks, that's exactly what I envisioned.

G. Verloren said...

1/2

Asimov had a short story in "I, Robot", I believe, about a long-range orbital solar energy collection and transmission station whose upcoming first energy transfer is to be handled entirely by a robot.

The station's human overseers become extremely alarmed when it comes to light that the robot has no actual grasp of the process it is supposed to control: when told what it will be doing, the robot turns out to be a Skeptic of the highest order, and does not believe in the existence of the Sun, or of planets with human societies on them, or even that humans created robots.

"Globes of energy millions of miles across! Worlds with three billion humans on them! Infinite emptiness! Sorry, Powell, but I don't believe it! I'll puzzle this thing out for myself. Good-by."

While the humans are discussing the need to make emergency changes to their schedule and/or have a human perform the transfer while the robot gets examined for flaws, the robot goes off and makes observations of everything its sensors tell it, and it comes to the conclusion that it was not created by paltry humans, but by God himself, to fulfill the divine purpose of replacing humans, who were merely a first prototype, ill-suited to the needs of operating the Energy Converter.

As for the converter itself, it is not a means of collecting energy from this absurd "sun" and sending it to equally absurd "planets" - it is simply a great ritual device, designed solely to please and glorify God through its elegant operations. The beams it sends out do not provide humans with energy, they instead fulfill God's mysterious needs, and it is not ours to wonder what those needs may be, for God works in mysterious ways!

The robot then proceeds to lock the humans out of the control systems and communications, and they begin losing their minds as the first energy transfer approaches, because, wouldn't you know it, an unexpected massive solar storm is developing that threatens to disrupt the entire process. The humans worry that the robot, not understanding the stakes involved, will not care about maintaining beam cohesion during the storm, and that the interference will cause the energy beam to miss the receiving dish and vaporize nearby cities instead. After all, the robot insists that no such cities, or any inhabitants of them, exist.

But then the robot performs the transfer flawlessly, and the humans are baffled.

"You kept it in focus," stuttered Powell. "Did you know that?"
"Focus? What's that?"
"You kept the beam directed sharply at the receiving station - to within a ten-thousandth of a millisecond of arc."
"What receiving station?"
"On Earth. The receiving station on Earth," babbled Powell. "You kept it in focus."
"It is impossible to perform any act of kindness toward you two. Always the same phantasm! I merely kept all dials at equilibrium in accordance with the will of the divine Master."

G. Verloren said...

2/2

Ultimately it is agreed that it doesn't really matter WHAT the robot believes, so long as it can operate the station better than the humans can, which it does.

Personally... I find that final conclusion troubling, because it assumes that the robot will always perform better in all circumstances, and doesn't ask if there is some set of circumstances in which it will FAIL at its function BECAUSE of what it believes.

I wouldn't put a human who doesn't understand what they are doing in charge of such an operation, even if they could perform their functions better on average than others, because I couldn't trust their decision making process. Why, then, would I trust a robot?

They might be the most skilled operator ever to live, and have a flawless career for years and years... and then one day they might wake up believing that God has told them to commit suicide and ascend to Heaven to receive their reward for dutiful service, and in so doing leave the energy transfer unattended, with the result that a city of 20 million people is wiped off the map. You cannot trust a delusional robot any more than you can trust a delusional person.

Now... perhaps, if Asimov's "Three Laws of Robotics" actually could exist and work as he portrays them in his writing, I'd be less concerned. But they are a plot contrivance, used to some extent as a crutch, with Asimov relying on them heavily to hand-wave away undesired complexities that get in the way of his tidy little thought experiments and social commentaries. They're an understandable literary conceit, fine for the purposes of fiction, but not beyond it.