In case anyone remembers amid the constant flow of content: I'm still playing around with the unsupervised learning algorithm I've invented. It stands apart from traditional neural networks in that it learns exclusively through observation, with no separate training and inference phases. I'm currently letting it watch video games and then predict how things in them behave. After my last two posts, I received the following feedback:
To address this, I sat down and built simple scenes in the Unity SDK that contain more complex objects than those I've found in retro or mobile games. Unity also lets me control the environment better, e.g. the amount of noise or the paths the objects take. And of course I've put together another video that shows the results. The video overlays the actual footage the system gets to see (the moving drones or spaceships) with the prediction output (the funny little yellow and blue dots). Initially, two intricate objects trace non-linear paths across the screen, one after another, each on a distinct route. Both originate at the same point (to demonstrate that the system can distinguish between different objects) and the paths intersect. Next, the first object starts at the same location again and the system predicts its path based on previously observed behavior; the same goes for the second object. This demonstrates that the algorithm can learn distinct, non-linear trajectories purely from observation and predict the right path for each object.
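To give an idea of how controllable such a scene is, here's a minimal sketch of a Unity script along these lines (the class name, fields, and Perlin-noise model are illustrative assumptions, not the exact code behind the video): the object follows a fixed list of waypoints, and a noise term can be dialed up or down to control how clean the observed path is.

```csharp
using UnityEngine;

// Illustrative sketch: an object follows a fixed series of waypoints,
// with optional Perlin-noise jitter on top, so the same route can be
// replayed cleanly or with controlled noise.
public class PathFollower : MonoBehaviour
{
    [Tooltip("Waypoints defining the non-linear route, in order.")]
    public Transform[] waypoints;

    [Tooltip("Travel speed in units per second.")]
    public float speed = 2f;

    [Tooltip("Amplitude of positional jitter (0 = clean path).")]
    public float noiseAmplitude = 0f;

    private Vector3 _basePosition; // noise-free position along the route
    private int _next;             // index of the next waypoint

    void Start()
    {
        _basePosition = transform.position;
    }

    void Update()
    {
        if (waypoints == null || waypoints.Length == 0) return;

        // Advance the noise-free position toward the next waypoint.
        Vector3 target = waypoints[_next].position;
        _basePosition = Vector3.MoveTowards(_basePosition, target, speed * Time.deltaTime);

        // Loop back to the first waypoint so the route can be replayed
        // for the prediction pass.
        if (Vector3.Distance(_basePosition, target) < 0.05f)
            _next = (_next + 1) % waypoints.Length;

        // Rendered position = clean path plus optional controlled noise.
        Vector3 jitter = noiseAmplitude > 0f
            ? new Vector3(Mathf.PerlinNoise(Time.time, 0f) - 0.5f,
                          Mathf.PerlinNoise(0f, Time.time) - 0.5f,
                          0f) * noiseAmplitude
            : Vector3.zero;
        transform.position = _basePosition + jitter;
    }
}
```

Keeping the noise separate from the waypoint progression means the same route replays identically from run to run, which is what makes it possible to restart an object at its origin and compare the system's prediction against the path it actually takes.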
Happy to hear what else this showcase does NOT demonstrate, so I can include it in my next demo.