Friday, October 23, 2020

Deep Minded Acme Spiders

The Google DeepMind projects include some demos and examples that look like an excellent base for what I want to try with the SpiderDroid.  Now let's see whether I can absorb enough new comp sci concepts to produce any interesting behaviors.

Tuesday, October 13, 2020

SpiderDroid Rewards

The core mechanism of using machine learning to "solve" games is an incentive: a reward signal the learning algorithm uses to prioritize and identify the best values for the parameters it controls.  For my experiments I think I will need an external pure incentive/disincentive stimulus that can be applied during any early assisted-learning stages.  I expect the process of connecting actuator parameters with movement results to be full of dead ends, and I want a way to back the ML algorithm out of them and encourage the patterns that look more productive.
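One way the external nudge above might work is as a term blended into the droid's own reward during assisted learning.  A minimal sketch, with entirely hypothetical names (nothing here is from Acme or any real SpiderDroid code):

```python
def shaped_reward(task_reward, operator_signal, operator_weight=0.5):
    """Blend the learner's intrinsic reward with a human +1/-1 nudge.

    task_reward:     reward computed from the droid's own sensors.
    operator_signal: +1 to encourage, -1 to discourage, 0 when absent.
    operator_weight: how strongly the operator can override; taper this
                     toward 0 as assisted learning winds down.
    """
    return task_reward + operator_weight * operator_signal
```

Decaying `operator_weight` over training would let the hand-holding fade out once the learned reward starts pointing in productive directions on its own.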

However, it seems relevant to mention the unfortunate babbling idiocy that eventually forced me to erase and restart every one of the early (local-brains-only) voice-to-text translators I tried.

Caveat creator.  Do I need a kill switch?


I will know that I have reached a new plateau when my spider droids can learn movement on their own by experimentally moving whatever collection of active joints I have given them.  I expect the first incentivized results will be discovering the most productive movements, and combinations of movements, as measured by visibly and inertially detectable changes in the position and location of a single eye cluster, and by detectable changes in sound composition and volume at the single ear cluster.  If I get that far, there will be others even more interesting.
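The simplest version of that experimental-movement loop is random search: sample joint offsets, score each trial by how far the sensors say the body moved, and keep the best.  A toy sketch under stated assumptions; `step_fn` stands in for the real droid-plus-sensors, and `toy_step` is a fake displacement signal for testing only:

```python
import random

def explore_gait(step_fn, n_joints=4, n_trials=200, seed=0):
    """Randomly sample joint-offset vectors and keep the one that produces
    the largest displacement as reported by step_fn (the sensor readout)."""
    rng = random.Random(seed)
    best_offsets, best_dist = None, float("-inf")
    for _ in range(n_trials):
        offsets = [rng.uniform(-1.0, 1.0) for _ in range(n_joints)]
        dist = step_fn(offsets)  # displacement seen by eye/ear clusters
        if dist > best_dist:
            best_offsets, best_dist = offsets, dist
    return best_offsets, best_dist

def toy_step(offsets):
    """Stand-in for the droid: displacement is largest when alternating
    joints move in opposition, a crude proxy for a walking gait."""
    return offsets[0] - offsets[1] + offsets[2] - offsets[3]
```

Real versions would replace random search with something sample-efficient (evolutionary strategies, or an Acme RL agent), but the incentive structure is the same: movement that the eye and ear clusters can detect is movement worth repeating.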