We are close to a world where most non-trivial software can be scaffolded and iterated by AI systems from a reasonably detailed natural-language spec. In my own work, this has already shifted the bottleneck away from implementation skill toward something closer to problem selection, system boundaries, and restraint.
I wrote about this shift: from “how do I implement this?” to “what is worth building, and what futures are we normalising when we deploy?” I’m very interested in how people here, who think about AI systems at a larger scale, see this dynamic.
- If software becomes abundant, what are the new scarce competences?
- Do you see “choosing what not to build” as a meaningful lever, or is that naive given the incentives and deployment dynamics at play?