How many layers/nodes/link weights does AlphaGo have?
Does anyone know the internals of AlphaGo's neural network model? For example, how many layers and nodes it has, what the link weights look like, and which activation/error functions it uses? submitted by /u/faxfrag
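Some of this is public: DeepMind's 2016 Nature paper describes the match version's policy network as 13 convolutional layers over 48 input feature planes on the 19×19 board, with ReLU activations and a softmax output. As a rough illustration (a simplified sketch based on my reading of that paper — the filter count k=192 and the per-layer bias accounting are assumptions to check against the original, which e.g. uses position-specific biases in the final layer), here is a back-of-the-envelope parameter count:

```python
# Simplified parameter count for an AlphaGo-style (2016) policy network:
# layer 1: 5x5 conv from 48 input planes to k filters; layers 2-12: 3x3 convs
# with k filters; layer 13: 1x1 conv down to 1 plane of move logits + softmax.
# Batch details and position-specific biases are ignored for simplicity.

def conv_params(in_ch, out_ch, kernel):
    """Weights plus one bias per output channel."""
    return kernel * kernel * in_ch * out_ch + out_ch

def policy_param_count(k=192, in_planes=48, layers=13):
    total = conv_params(in_planes, k, 5)                           # layer 1: 5x5
    total += sum(conv_params(k, k, 3) for _ in range(layers - 2))  # layers 2-12: 3x3
    total += conv_params(k, 1, 1)                                  # layer 13: 1x1
    return total

print(policy_param_count())  # roughly 3.9 million parameters
```

So the bulk of the parameters sit in the eleven 3×3 middle layers; the later AlphaGo Zero networks are much larger (residual blocks with 256 filters).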
I’m a CS student graduating next year. My education has been pretty poor, due to personal problems and a weak school curriculum (we never reached the requisite depth in any of our courses, and skimmed over a lot). After graduation, I want to take a few years off (I’m thinking 3–6) to do a lot of self-study.
I’m not sure how my knowledge level compares to international standards, so just assume no prior CS knowledge (I’ll skip things I already know satisfactorily, but I don’t expect to know anything deeply enough that it would be worth skipping entirely). For mathematics, I am at high-school level (currently learning algebra and logic in my free time), minus calculus (which I never really learned), plus a little discrete maths. I have no prior philosophy training, and it is sufficient to assume that the entirety of my philosophy knowledge comes from LessWrong.
I have a set of goals I want to achieve, and I want to learn the required computer science (among other things) in order to achieve them. I plan to pursue a postgraduate degree towards that goal after my gap years (I intend to start producing original research within at most ten years, and most likely much earlier than that).
“Develop” doesn’t mean that a model doesn’t already exist; rather, I plan to improve on existing models or, if needed, build one from scratch. The aim is a model that is satisfactorily useful, with a very high bar for “satisfactory” (I think our models of computation are satisfactorily useful; the end goal is a theory that can be implemented to build HLMI). I don’t plan to needlessly reinvent the wheel: when I set out to pursue my goal of formalising intelligence, I will build on the work of others in the area.
How much CS/maths/analytical philosophy/other relevant material do I need to learn, which areas should I focus on, and how deep do I need to go? I want to prepare a complete curriculum for myself. I’d appreciate links to learning resources and recommended books, but I would also appreciate mere pointers.
I’m not familiar with the research in this area. I’m vaguely aware that AIXI is along the lines of what I want to do (though I haven’t read it yet), but not much beyond that. I don’t know what has and hasn’t been done, so I don’t know what counts as reinventing the wheel. My aim is to do for intelligence what Turing (and Church) did for computation.
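For orientation (my summary, not part of the original post): Hutter's AIXI is exactly an attempt at such a formalisation. It defines the optimal agent by an expectimax over all computable environments, weighted by the length of the shortest program generating them:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here $a$, $o$, $r$ are actions, observations, and rewards, $U$ is a universal Turing machine, $q$ ranges over programs of length $\ell(q)$, and $m$ is the horizon. The agent is uncomputable, so the surrounding literature (computable approximations, optimality notions) is the natural starting point for a reading list.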
Please help me.
submitted by /u/Dragon-God
For all machine learning and AI enthusiasts: here is a great chance to take part in a project that is going to be a milestone in the history of Go software.
What can you do? You can contribute your machine time by running a small self-play client that generates new games and training data, which are used to train Leela Zero and make it stronger and stronger.
We need your help! We need everyone’s help to speed up the training process. Be part of a great open-source, community-driven project, and let’s all make history!
Please visit the links below to learn more. Thank you.
https://github.com/gcp/leela-zero
https://www.reddit.com/r/cbaduk/
General information about the game of Go: https://en.wikipedia.org/wiki/Go_(game)
General information about computer Go and Go software: https://en.wikipedia.org/wiki/Computer_Go
submitted by /u/therazorguy
I’m currently trying to formulate a research project as part of my undergraduate graduation requirement. I’ve always had an interest in AI research, and recently I’ve also developed an interest in topics related to logic in computation (SAT solvers and such are neat). I was hoping to develop a project exploring AI as it relates to logic, but I’ve been having difficulty finding recent research on such topics.
Is heavy use of logic more or less obsolete in modern AI research? Machine learning seems to be by far the most prominent field of AI research these days; is that just the hype surrounding ML, or is it because other approaches have been dead ends?
Assuming that a research project relating logic to AI isn’t a dead end: do you have any suggestions for recent literature on logic in AI, or useful vocabulary that would help me find such literature? (My knowledge of logic in computer science isn’t very extensive, so I suspect I just don’t know enough of the relevant topics/subfields to find the relevant research.)
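As a concrete anchor for the "SAT solvers are neat" starting point (my illustrative sketch, not from the post): the classic DPLL procedure — unit propagation plus branching — is the core of modern SAT solving and a good seed for a project proposal. A minimal version fits in a few dozen lines:

```python
# Minimal DPLL SAT solver sketch. A formula is a list of clauses; a clause is
# a list of nonzero ints (DIMACS-style literals: 3 means x3, -3 means NOT x3).

def simplify(clauses, lit):
    """Assume `lit` is true: drop satisfied clauses, strip the negated literal."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                          # clause satisfied, drop it
        out.append([l for l in clause if l != -lit])
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None if UNSAT."""
    if assignment is None:
        assignment = {}
    # Unit propagation: repeatedly assign literals forced by one-literal clauses.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = clause[0]
                assignment[abs(lit)] = lit > 0
                clauses = simplify(clauses, lit)
                changed = True
                break
    if any(len(c) == 0 for c in clauses):
        return None                           # empty clause -> conflict
    if not clauses:
        return assignment                     # all clauses satisfied
    # Branch on the first variable of the first remaining clause.
    var = abs(clauses[0][0])
    for value in (True, False):
        lit = var if value else -var
        result = dpll(simplify(clauses, lit), {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))
```

Real solvers add clause learning (CDCL), watched literals, and restarts on top of this skeleton; searching for "CDCL", "SMT", "answer set programming", or "neuro-symbolic AI" should surface the recent logic-meets-AI literature.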
submitted by /u/QuarterTortoise