DeepMind, the people who gave us world-beating programs for chess, Go, and Asteroids, have created an artificial intelligence system called AlphaFold that predicts the folded structure of a protein in a matter of hours. This is a very hard scientific problem that until now has taken months or even years of laboratory work for each protein studied.
DeepMind trained the program by feeding it data on proteins whose structures are already known and letting it generalize, then entered it in the Critical Assessment of Protein Structure Prediction (CASP), a contest established back in 1994 to track efforts to solve the problem. AlphaFold's accuracy at CASP rivaled that of traditional laboratory methods.
This is a very cool scientific advance, but it does raise the question of what scientists will do when all the hard problems can be solved by AI faster than people can do it.
"This is a very cool scientific advance, but it does raise the question of what scientists will do when all the hard problems can be solved by AI faster than people can do it."
To quote an xkcd comic strip, "Mission. Fucking. Accomplished."
No scientist wants to spend months working on protein folding. There are countless better things they could be doing with their time than enduring that extreme tedium.
People always worry about "what will group X do once automation takes over", and the answer is always "something much more useful that can't be automated".
For example, when ATMs were first being introduced, there was much wringing of hands about how it threatened to destroy the careers of every bank teller on the planet - "What will they do when people can just make withdrawals or deposits via a machine?" It turns out, plenty - customers still have many needs beyond the services ATMs can provide, and you also need people to maintain the ATMs or do their jobs if the ATMs are out of service temporarily.
Autopilot caused a panic in the airline industry, but it turned out you still want an actual human being on hand, even if they're mostly just there to perform takeoffs and landings and handle emergencies - and there really aren't any fewer pilots than there used to be.
Self driving cars have truckers spooked that they'll be rendered obsolete, but I think all that such vehicles will do is shift the specific tasks that the job of a truck driver entails - instead of doing the actual driving, they will simply focus more on the details of operating a truck that require human hands, like vehicle maintenance en route, cargo management, security and theft prevention, taking manual control as required for unexpected road conditions, writing and logging reports, et cetera, et cetera.
The fear that automation will ruin lives is misplaced - anybody whose job consists of doing tedious, repetitive, unpleasant tasks can and should be put to much better use doing something else - and even in the rare cases where there isn't much else to do, if we can achieve the same level of production with less human toil and misery, how is that a bad thing?
"This is a very cool scientific advance, but it does raise the question of what scientists will do when all the hard problems can be solved by AI faster than people can do it."
UBI.
There is nothing--nothing--that humans do that AI will not eventually do better. This includes writing poetry and things requiring manual dexterity, though the latter may take a while.
I find this absolutely terrifying, and no amount of historical analogies along the lines of "we were afraid of X and nothing happened" is going to make me feel better. That said, how I or anyone else feels about it is almost certainly irrelevant, since there is probably nothing that can stop it.
On the other hand, for anyone troubled by meritocracy, there is the grim satisfaction that even the greatest overachievers are going to be just as outmoded by the machines as the rest of us. It'll be something to think about as our atoms are reconfigured into computronium.
@David
If we reach the point where AI can do those things better, we'll be on the verge of a post-scarcity society anyway.
It always boggles my mind how people fear the idea of a future in which no one has to work to survive. The rat race is not a virtue - it's a necessary evil, and if we can achieve the same production without forcing the rats to toil, what's the issue?
I'm not afraid of automation - I'm afraid of people clinging to systems like Capitalism in an increasingly automated world. I'm afraid of the benefits of automation being stolen from humanity by the greedy few - evil dragons luxuriating on mounds of stolen, blood-stained gold for which they have no use.
Right now, this very moment, we could feed every person on the planet - but we don't. People starve to death every single day, yet we accept that tragedy as the price of propping up the ultra-rich for no apparent reason.
We have the medicine to treat chronically sick individuals, and yet we allow pharmaceutical companies to inflate the prices to thousands of times what it costs for them to manufacture those drugs, solely because the market will bear it and because we value corporate profits over human lives.
Automation doesn't threaten the world - the rich who we refuse to challenge do. Automation isn't going to drive millions to despair, poverty, and death - our warped priorities and cultish worship of naked greed and the pursuit of profits will.
The issue isn't the tools - the issue is how we allow them to be used. The problem is the people we allow to have power over others. The problem is the culture of selfishness and exploitation that we accept as normal, natural, necessary, and inevitable. The problem is we unjustly hate the poor and slavishly worship the rich. The problem is who we choose to be as people, not the technology we possess.
We could be building bridges, and instead we build bombs. We could be curing the sick freely, and instead we hold the cures hostage for profit. We could house every human on earth, and instead we elevate real estate tycoons like Trump who evict tenants into the streets while they themselves live in gold-encrusted penthouses. We could feed every starving child, and instead we expect the poor to live on cheap processed junk food while the obscenely wealthy put literal gold on their bread.
Automation isn't the problem. Inequality is the problem. All automation does is make the pie bigger - it doesn't force us to hand the masses an ever-shrinking share of that pie while further stuffing the faces of the wealthy aristocracy. We choose to do that.
I don't fear a future in which no one has to work to survive. I fear a future in which we are at the AI's mercy.
@David
Yes, because that's so much worse than being at the mercy of our fellow human beings...
To be fair, I don't think you have anything to worry about. "Strong AI" is still purely theoretical - and even if it ends up being possible, it may well be beyond realistic human capacity to actually create. You might as well worry about a future in which bio-engineered "uplifted" dogs and cats rule the world, because that's probably just about as likely an outcome.
All we have on earth at the moment, and for the foreseeable future, are trial-and-error machines that iterate extremely rapidly. They're still just very advanced calculators, which only produce the results we design them to produce - even if we don't always think through all the ramifications of the parameters we establish for their operation and usage.
Like so many other pieces of modern technology, they're not particularly dangerous in themselves - it's the human element that makes them problematic. But that's literally true of everything.
Are you afraid of automobiles, because of their incredible potential for destruction and bodily harm? No, of course not! You have faith that the other people on the road will choose NOT to plow their vehicles into your own and kill you, despite it being something they could do at any time.
Are you terrified of the army, because of their vast arsenal of weapons that they could absolutely use to kill you and your loved ones at a moment's notice? Seemingly not! You trust that the military isn't going to bomb you out of the blue, because why would they?
So why are you scared of "AI", which is really just very fancy algorithmic tricks? Like everything else that other people could use to ruin your life, the danger is in the human element - as long as we take the proper steps to ensure that people misusing "AI" is about as uncommon as people using cars to run others over, none of us should lose any sleep over it.
If we're not losing sleep over the threat of the nuclear weapons we've been living with for the past several generations, then I fail to see how even the most advanced "AI" systems we'll be able to make in the next few centuries could POSSIBLY compete with them as a legitimate cause for concern. Worry about the people, not the tech.
In fact, a lot of very smart people share my worry. Or more accurately, they've taught me to share theirs. Stephen Hawking was one of them. Eliezer Yudkowsky is another.
@David
I'm not saying it's a complete non-concern. I'm saying it's no worse than countless other things we've already dealt with successfully. If we can survive the invention of nuclear weapons, we can survive automation and Weak AI.
We just need to be smart about how we use them - just like every dangerous thing we invent. The danger isn't the technology so much as the people.
Now, Strong AI might be another matter, but that's a lot like worrying about aliens landing in Central Park. It's not worthless to speculate about it and give thought to how we might or should respond to such a development, but don't lose sleep over it.