Ray Kurzweil On Future AI Project At Google
Here is a good 11-minute interview with Ray Kurzweil.
In the past Google has been fairly open about publishing details of how its infrastructure works (e.g., MapReduce, the Google File System), so I am hopeful that the work of Ray Kurzweil, Peter Norvig, and their colleagues will be published sooner rather than later.
In the video Kurzweil talks about how the neocortex builds hierarchical models of the world through experience; he pioneered the use of hierarchical hidden Markov models (HHMMs). It is beyond my own ability to judge whether HHMMs are better than the kind of hierarchical representations formed in deep neural networks, as discussed at length by Geoffrey Hinton in his class "Neural Networks for Machine Learning." In this video and in Kurzweil's recent Authors at Google talk he also discusses IBM's Watson project and how it is capable of capturing semantic information from the articles it reads. Humans do a better job of extracting information from a single article, but, as Kurzweil says, IBM Watson can read every Wikipedia article - something no human can do.
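If you have not worked with HMMs before, the flat (non-hierarchical) case that HHMMs generalize is easy to sketch. Here is a minimal toy example of the forward algorithm in Python; the two-state model and all of its probabilities are made up purely for illustration, not taken from anything Kurzweil describes:

```python
import numpy as np

# Toy forward algorithm for a plain (non-hierarchical) HMM.
# HHMMs extend this flat model with a hierarchy of states.

def forward(pi, A, B, obs):
    """Return P(obs) under an HMM.
    pi:  initial state distribution, shape (n_states,)
    A:   state transition matrix, shape (n_states, n_states)
    B:   emission matrix, shape (n_states, n_symbols)
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]          # joint prob. of first symbol and each state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, weight by emission
    return alpha.sum()

# Hypothetical two-state model emitting one of three symbols.
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.1, 0.4, 0.5],
               [0.6, 0.3, 0.1]])
print(forward(pi, A, B, [0, 1, 2]))   # likelihood of the symbol sequence 0, 1, 2
```

An HHMM replaces each state with a sub-HMM that can itself contain sub-HMMs, which is what lets the model capture the kind of nested structure Kurzweil attributes to the neocortex.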
As an old Lisp hacker it fascinates me that Google does not use Lisp languages for AI, since languages like Common Lisp and Clojure are my go-to languages for coding "difficult problems" (otherwise I just use Java <grin>). I first met Peter Norvig at the Lisp Users & Vendors Conference (LUV) in San Diego in 1992. His fantastic book "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp" had just been published in 1991, as had my far more modest Springer-Verlag book "Common LISP Modules: Artificial Intelligence in the Era of Neural Networks and Chaos Theory." Anyway, it is not for me to tell companies what programming languages to use :-)
I thought that one of the most interesting parts of the linked video was Kurzweil's discussion of how he sees real AI (i.e., software that can understand natural language) fitting into Google's products.
While individuals like myself and small companies don't have the infrastructure and data resources that Google has, if you are interested in "real AI," deep neural networks, etc., I believe that it is still possible to perform useful (or at least interesting) experiments with smaller data sets. I usually use a dump of all Wikipedia articles, without the comments and edit history. Last year I processed Wikipedia with both my KBSPortal NLP software (for which, incidentally, I hope to ship a major new release in about two months) and the excellent OpenCalais web services. These experiments took only some patience and a leased Hetzner quad-core i7 server with 32GB of RAM. As I have time, I would also like to experiment with a deep neural network language model, as discussed by Geoffrey Hinton in his class.
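If you want to try this kind of experiment yourself, streaming the articles out of a Wikipedia dump is mostly boilerplate. Here is a minimal Python sketch, assuming the standard pages-articles dump (the file name and XML namespace below are examples and vary by dump version); each article's text can then be handed to whatever NLP pipeline you are using:

```python
import bz2
import xml.etree.ElementTree as ET

# Stream article titles and wikitext out of a Wikipedia XML dump.
# The pages-articles dump already excludes talk pages and edit history.
DUMP = "enwiki-latest-pages-articles.xml.bz2"             # example file name
NS = "{http://www.mediawiki.org/xml/export-0.10/}"        # namespace varies by dump version

def iter_articles(path):
    with bz2.open(path, "rb") as f:
        for _, elem in ET.iterparse(f, events=("end",)):
            if elem.tag == NS + "page":
                title = elem.findtext(NS + "title")
                text = elem.findtext(f"{NS}revision/{NS}text") or ""
                yield title, text
                elem.clear()  # drop the parsed subtree to keep memory bounded

for title, text in iter_articles(DUMP):
    pass  # hand each article to your NLP pipeline here
```

The full English dump is tens of gigabytes uncompressed, so the point of the incremental `iterparse` loop is that nothing close to the whole file ever has to fit in memory - which is what makes a single leased server sufficient for this kind of work.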