Tuesday, October 14, 2025

Here's an Idea for You

From Scott Siskind's ACX Grants post:

Aaron Silverbook, $5K, for approximately five thousand novels about AI going well. This one requires some background: critics claim that since AI absorbs text as training data and then predicts its completion, talking about dangerous AI too much might “hyperstition” it into existence. Along with the rest of the AI Futures Project, I wrote a skeptical blog post, which ended by asking - if this were true, it would be great, right? You could just write a few thousand books about AI behaving well, and alignment would be solved! At the time, I thought I was joking. Enter Aaron. He and a cofounder have been working on an “AI fiction publishing house” that considers itself state-of-the-art in producing slightly-less-sloplike AI slop than usual. They offered to literally produce several thousand book-length stories about AI behaving well and ushering in utopia, on the off chance that this helps. Our grant will pay for compute. We’re still working on how to get this included in training corpuses. He would appreciate any plot ideas you could give him to use as prompts.

1 comment:

G. Verloren said...

These people still imagine that AIs are "thinking". That they have thoughts, and hold positions, and make decisions, and are cognizant.

AIs merely "predict". They match data to data. Obviously if you feed one a certain kind of input, you'll see a matching kind of output (within certain parameters). Changing the inputs will, of course, change the outputs. But the outputs are not any kind of indicator of the machine having thoughts or opinions.

AIs don't have any understanding of what "dangerous AI" is. They don't have any understanding of anything. If you input a bunch of books about dangerous AI, they will output words and sentence structures that roughly match those inputs - but those words and sentences do not have meaning for the machines.

It's like a lyrebird imitating the sound of a camera shutter, or of a car alarm, or of a chainsaw. The bird has no conception of what a chainsaw actually is. If it makes that sound at you, it is not somehow trying to communicate a threatening intent to saw you in half. If it makes the sound of a car alarm, it isn't trying to warn you a vehicle is being stolen. It is merely MIMICKING these things, without having any real UNDERSTANDING of them. There is no context, and so there can be no meaning.

So-called "AIs" fundamentally have zero context for anything. A bird can at least develop context by forming basic connections - if you play classical music every time you feed a bird, it will come to associate the sound of the music with the arrival of food, and expect the two to coincide. Et cetera.

But these machines do not have that capability. They simply mimic, and their programming is designed to self-adjust toward the desired mimicry. They literally operate on trial and error - they process data, then get graded on how well their output matches the actual next words in the training text (human feedback only enters in later fine-tuning stages), which nudges the numerical weights inside the model a little, which then affects how it processes the next bit of data, and so on.
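Concretely, that "grading" step is just a loss computed against the actual next character or word, and the "weights" are a table of numbers that get nudged after each comparison. A toy sketch of one such loop (a made-up character-level bigram model in plain numpy, not any real system's code):

```python
# Toy sketch: train a tiny character-level bigram model by nudging weights
# to better predict the next character. Purely illustrative, not a real system.
import numpy as np

text = "ai behaving well. ai behaving well."
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}
V = len(vocab)

# The "weights" are just a V x V table of numbers: how strongly each
# character predicts each possible next character.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for epoch in range(200):
    grad = np.zeros_like(W)
    pairs = list(zip(text[:-1], text[1:]))
    for prev, nxt in pairs:
        i, j = stoi[prev], stoi[nxt]
        p = softmax(W[i])        # model's prediction for the next character
        g = p.copy()
        g[j] -= 1.0              # graded against the character that actually came next
        grad[i] += g
    W -= lr * grad / len(pairs)  # nudge the weights slightly and repeat
```

Nothing in that loop holds an opinion about the text; it only shifts numbers so the outputs match the inputs more closely.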

Which is all to say that if you input "dangerous" novels and get matching text back, that isn't somehow teaching the machine to be malicious. It has no capacity for malice. It has no capacity for thought. It is just outputting words that match other words, without understanding any meaning. It is pure mimicry.