I'm a CS student graduating next year. My education has been pretty crap, due to personal problems and my school's curriculum being pretty weak (we never reached the requisite depth in any of the courses we took, and skimmed over a lot). After graduation, I want to take a few years off (I'm thinking 3–6) to do a lot of self-study.
I'm not sure how my knowledge level compares to international standards, so just assume no prior CS knowledge (I'll skip things I already know satisfactorily, but I don't expect there to be anything I know deeply enough that it would be worth skipping entirely). For mathematics, I am at high-school level (currently learning algebra and logic in my free time) sans calculus (which I never really learned), with a little discrete maths. I have no prior philosophy training, and it is sufficient to assume that the entirety of my philosophy knowledge comes from LessWrong.
I have a set of goals I want to achieve, and I want to learn the required computer science (among other things) to achieve them. I plan on pursuing a postgraduate degree towards that goal after my gap years (I intend to start producing original research in at most ten years, and most likely much earlier than that).
Goals:
- Formalise learning: Develop a model of learning (the way we have a model of computation). What does it mean for one learning algorithm to be better than another? Develop a method for analysing and comparing the performance of learning algorithms (I'm thinking of asymptotic analysis; at least for now, all the analysis I plan to do would be asymptotic) on a particular problem, across a particular problem class, and across problem space, using a particular knowledge representation system (KRS), using various KRS, and across the space of possible KRS. Develop a hierarchy of learning algorithms. Bonus: develop a provably optimal learning algorithm.
- Formalise knowledge: Develop a model of a KRS. Develop a method for quantifying and measuring "knowledge". Develop a method for analysing and comparing KRS, using a particular learning algorithm, using various types of learning algorithms, and across the space of learning algorithms, on a particular problem, across a particular problem class, and across problem space. Develop a hierarchy of KRS. Synthesise these results with the work on formalising learning above into a theory of knowledge ("knowledge theory"). Bonus: develop a provably optimal KRS.
- Formalise intelligence: Develop a model of intelligence, and a method for quantifying and measuring the intelligence of arbitrary agents in agent space. Understand intelligence and what makes certain agent designs produce more intelligent agents. Develop a hierarchy of intelligent agents. Is there a limit to intelligence? Bonus: develop a provably optimal intelligent agent.
NB:
"Develop" doesn't mean that one doesn't already exist; rather, I plan to improve on existing models, or build one from scratch if needed. The aim is a model that is satisfactorily useful, with a very high bar for "satisfactory" (I think our models of computation are satisfactorily useful; the end goal is a theory that can be implemented to build HLMI). I don't plan to (needlessly) reinvent the wheel: when I set out to pursue my goal of formalising intelligence, I would build on the work of others in the area.
How much CS/maths/analytical philosophy/other relevant material do I need to learn, which areas should I focus on, and how deep do I need to go? I want to prepare a complete curriculum for myself. I'd appreciate links to learning resources and recommended books, but I would also appreciate mere pointers.
I'm not familiar with the research in this area. I'm vaguely aware that AIXI is along the lines of what I want to do (I haven't yet read it), but not much beyond that. I don't know what has and hasn't been done, so I don't know what counts as reinventing the wheel. My aim is to do for intelligence what Turing (and Church) did for computation.
Please help me.