Friday, April 12, 2019

What if the Robots Collude?

Big firms increasingly use algorithms to set prices, especially in financial markets. Some of those robots are neural networks trained by machine learning, which means that nobody has explicitly programmed them. Researchers have already shown that computers can decide that the best strategy is to fix prices with their competitors, and this worries regulators; a sketch of that kind of experiment follows the interview below. This is from an interview with Preston McAfee in the Federal Reserve Bank of Richmond's Econ Focus, via Marginal Revolution:

EF: What are the implications of machine learning, if any, for regulators?

McAfee: It is likely to get a lot harder to say why a firm made a particular decision when that decision was driven by machine learning. As companies come more and more to be run by what amount to black-box mechanisms, the government needs more capability to deconstruct what those black-box mechanisms are doing. Are they illegally colluding? Are they engaging in predatory pricing? Are they committing illegal discrimination and redlining?

So the government's going to have to develop the capability to take some of those black-box mechanisms and simulate them. This, by the way, is nontrivial. It's not like a flight recorder: the mechanism may be distributed among potentially thousands of machines, it could be hundreds of interacting algorithms, and there might be hidden places where thumbs can be put on the scale.

I think another interesting issue now is that price fixing has historically been the making of an agreement. In fact, what's specifically illegal is the agreement itself. You don't have to actually succeed in rigging the prices; you just have to agree to rig the prices.

The courts have recognized that a wink and a nod is an agreement. That is, we can agree without writing out a contract. So what's the wink-and-nod equivalent for machines? I think this is going somewhat into uncharted territory.
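What does it look like for pricing algorithms to learn this on their own? Here is a minimal sketch in the spirit of those experiments, not the cited research itself; the demand rule, price grid, and learning parameters are all assumptions chosen for illustration.

```python
import numpy as np

# A minimal sketch of this kind of experiment, not the cited research.
# Two independent Q-learning agents repeatedly post prices in a duopoly.
# Every number below (price grid, cost, learning parameters) is an
# illustrative assumption.

rng = np.random.default_rng(0)

PRICES = np.round(np.linspace(1.0, 2.0, 6), 2)  # assumed discrete price grid
COST = 1.0                                      # assumed marginal cost
N = len(PRICES)

def profits(i, j):
    """Profit pair when firm 0 posts PRICES[i] and firm 1 posts PRICES[j].
    The cheaper firm serves the whole unit market; a tie splits it."""
    p0, p1 = PRICES[i], PRICES[j]
    if p0 < p1:
        return p0 - COST, 0.0
    if p1 < p0:
        return 0.0, p1 - COST
    return (p0 - COST) / 2, (p1 - COST) / 2

# One Q-table per firm: Q[f][s0, s1, a] values action a when last
# period's posted prices had grid indices (s0, s1).
Q = [np.zeros((N, N, N)) for _ in range(2)]
ALPHA, GAMMA = 0.1, 0.95
state = (0, 0)

for t in range(300_000):
    eps = max(0.01, np.exp(-t / 50_000))  # decaying exploration rate
    acts = tuple(
        int(rng.integers(N)) if rng.random() < eps
        else int(np.argmax(Q[f][state]))
        for f in range(2)
    )
    rewards = profits(*acts)
    for f in range(2):
        old = Q[f][state + (acts[f],)]
        target = rewards[f] + GAMMA * Q[f][acts].max()
        Q[f][state + (acts[f],)] = old + ALPHA * (target - old)
    state = acts

# Inspect what the greedy policies settled on. In experiments of this
# flavor, prices can end up above the one-shot competitive level even
# though the agents never exchanged a message.
print("learned prices:", PRICES[state[0]], PRICES[state[1]])
```

Nothing in the reward or the update rule refers to the rival's welfare, yet runs like this can settle on prices above the one-shot competitive level: a machine version of the wink and a nod McAfee describes, with no message ever exchanged.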
2 comments:
Machine learning in no way precludes programming in certain hard restrictions.
You can build an algorithm that learns to play chess by emulating the human players it goes up against, yet still build into that algorithm a hard limit that prevents it from ever moving certain pieces in certain ways.

In fact, we already do this: the computer is held to the rules of chess and can't algorithmically "learn" to perform moves that aren't legal, even if it witnesses a human player make an illegal move (see the sketch just below).
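One hedged sketch of what that hard limit might look like in code: the learned model scores moves freely, but a mask makes illegal ones unselectable. The helper names here are hypothetical stand-ins, not any particular chess library's API.

```python
import numpy as np

# A sketch of the hard restriction described above, via action masking.
# `score_moves` and `legal_moves` are hypothetical stand-ins for a trained
# evaluator and a rules-based move generator; no real chess library is used.

def choose_move(position, all_moves, legal_moves, score_moves):
    scores = np.asarray(score_moves(position, all_moves), dtype=float)
    legal = np.array([m in legal_moves for m in all_moves])
    scores[~legal] = -np.inf  # the hard limit: an illegal move can never win
    return all_moves[int(np.argmax(scores))]
```

The learner can study illegal moves all it wants; the mask is enforced at selection time, so one can never actually be played.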
Price fixing is ultimately a form of price inflation. The involved parties agree to set prices at a point higher than competitive pricing would produce, allowing all parties to have larger profit margins by refusing to undercut one another.
If you design an algorithm to achieve competitive pricing, the only way it could possibly decide to NOT price products competitively is if you allow it to operate on the assumption that competitors will not attempt to undercut your price.
And if you've built an algorithm which doesn't assume competitors will undercut you, then by definition you haven't built it to achieve competitive pricing. At that point you start looking pretty guilty of collusion, especially if someone else's algorithm likewise doesn't assume that you will undercut it (a toy version of this contrast appears after this comment).
I doubt this is as difficult as it's being made out to be, but what will probably happen is lawsuits, and the lawsuits will settle the law, responsibility, and accountability.
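A toy rendering of the contrast the commenter draws, with assumed numbers: one rule genuinely competes by undercutting, the other is built on the no-undercutting assumption.

```python
# A toy version of the contrast above. The cost, tick size, and starting
# price are assumed numbers; the two rules are deliberate caricatures.

COST = 1.00
TICK = 0.01

def competitive_price(rival):
    """Undercut the rival by one tick, but never price below cost."""
    return max(COST, rival - TICK)

def trusting_price(rival):
    """Built on the no-undercutting assumption: simply match the rival."""
    return rival

def run(rule, steps=200):
    a = b = 2.00
    for _ in range(steps):
        a, b = rule(b), rule(a)  # both firms reprice each period
    return round(a, 2), round(b, 2)

print("competitive vs. competitive:", run(competitive_price))  # (1.0, 1.0)
print("trusting vs. trusting:", run(trusting_price))           # (2.0, 2.0)
```

Two copies of trusting_price hold prices at the starting level indefinitely without any communication, which is exactly why such a design "starts looking pretty guilty."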