https://communemag.com/optimize-what/
The course is about “deep neural networks,” a popular class of machine-learning models powered at their core by various optimization techniques. However, the optimization paradigms used here are not of the planning variety. The algorithm designer picks a heuristic objective function that incentivizes a good model, but there are no constraints, and there are no guarantees of obtaining a good solution. Once trained, the model is run on some sample data; if the results are poor, the designer tweaks one of dozens of hyperparameters or refines the objective with an additional term, then tries again. Eventually, we are promised, we will arrive at a trained neural network that accurately executes its desired task.
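For the uninitiated, a minimal sketch of this trial-and-error workflow, written in PyTorch on a toy regression task (the model, data, and knob values here are illustrative, not from the course):

```python
import torch
import torch.nn as nn

# Toy "sample data": inputs and noisy targets.
X = torch.randn(256, 10)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)

def train(lr, weight_decay):
    """One attempt: pick a heuristic objective, optimize, hope."""
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    # The objective is heuristic: mean squared error, plus an L2
    # penalty (weight_decay) -- the kind of "additional term" bolted
    # on when results are poor. There are no constraints, and no
    # guarantee the optimizer finds a good solution.
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.MSELoss()
    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# The outer loop is the designer: if the results are poor, tweak one
# of the dozens of knobs and try again.
for lr in (1.0, 0.1, 0.01):
    for wd in (0.0, 1e-4):
        print(f"lr={lr}, weight_decay={wd}: final loss {train(lr, wd):.4f}")
```

Note that nothing in the loop explains *why* any particular setting should work; the designer simply reruns it until the number at the end looks acceptable.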
The whole process of training a neural net is so ad hoc, so embarrassingly unsystematic, that students often find themselves perplexed as to why these techniques should work at all. A friend who also took the course once asked a teaching assistant why deep learning fares well in practice, to which the assistant responded: “Nobody knows. It just does.”
In fact, isn’t this free-wheeling, heuristic form of optimization reminiscent of how economics is understood today? Rather than optimization as planning, we seek to unleash the power of the algorithm (the free market). When the outcomes are not as desired, or the algorithm optimizes its objective (profit) much too zealously for our liking, we meekly correct for its excesses in retrospect with all manner of secondary terms and parameter tuning (taxes, tolls, subsidies). All the while, the algorithm’s inner workings are opaque and its computational power is described in terms of magic, evidently understandable only by a gifted and overeducated class of technocrats.