Something I’ve always been interested in is machine intelligence. Before venturing into this subject, I should give my definition of intelligence. Firstly, I always use the term ‘machine intelligence’ instead of ‘artificial intelligence’, for the simple reason that we’re discussing something that really is intelligent. Sometimes I use the term ‘adaptive logic’.
Also, the type of genuine machine intelligence I’m writing about isn’t of the variety that exhibits human characteristics. There’s too much emphasis on trying to give non-thinking systems personality.
The too-often mentioned Turing Test is a very poor one, as it tells us nothing about whether a machine can think – it only reveals a program’s capacity to give human-like responses that were written by a human programmer anyway. The program itself merely does exactly what it was programmed to do, and nothing else. Genuine machine intelligence involves a system’s capacity to adapt and learn.
While we aren’t anywhere near manufacturing the androids and robots of science fiction (robotics is another subject), there have been several major advances that lead me to think we are very close to creating something that can actually think, adapt and learn.
// Blue Brain Project //
Hardware-based neural network systems are extremely limited in their capacities to learn, and while there are many practical uses for them, the most advanced can only handle really basic tasks. Simulated networks are much easier to create. Even I was able to create a 5,000 neuron simulation that could learn and recall half a dozen shapes. The cost of doing this with actual PICs and the amount of time it would take would have been prohibitive.
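My own 5,000-neuron simulation isn’t reproduced here, but the same idea of learning and recalling shapes can be sketched with a tiny Hopfield-style associative memory: patterns are stored with a Hebbian rule, then recovered from a noisy cue. The shapes and sizes below are purely illustrative.

```python
# Minimal Hopfield-style associative memory: stores binary patterns via a
# Hebbian rule and recalls them from noisy input. An illustrative sketch,
# far smaller than a 5,000-neuron simulation.

def train(patterns):
    """Hebbian rule: w[i][j] accumulates p[i]*p[j] over patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronously update all neurons, settling toward a stored pattern."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# Two 3x3 'shapes' flattened to +1/-1 vectors: a cross and a square outline.
cross  = [-1, 1, -1,  1, 1, 1,  -1, 1, -1]
square = [ 1, 1, 1,  1, -1, 1,  1, 1, 1]

w = train([cross, square])
noisy = list(cross)
noisy[0] = 1                       # corrupt one 'pixel' of the cross
print(recall(w, noisy) == cross)   # prints True: the stored shape is recovered
```

Even this toy version shows the key property: the system wasn’t given the answer for the noisy input, yet it settles back to the learned shape.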
A breakthrough was made a couple of years ago by the Blue Brain Project, in which a neocortical column of a rat was simulated with the help of over 8,000 processors. The simulation itself included 10,000 virtual neurons. It is different from previous efforts, as the BBP team were simulating down to the cellular scale in an attempt to accurately emulate the biological model.
At first, electrical pulses had to be simulated to get any reaction. After some time, the neurons started to interact and organise by themselves, and gradually 10 million connections were formed.
A number of replicated neocortical columns would form a neocortex, and this is where the Blue Brain Project team believe they will see reasoning and spatial awareness appear as emergent properties. Perhaps these capacities will develop into something more complex.
// Hexapod //
Another development has been something called the Hexapod, created by Micromagic Systems. By itself, the Hexapod is just a frame with a number of servos and a servocontroller. It became a lot more interesting when the whole thing was linked to a tiny camera and a laptop running an image recognition program. At first it was able to track, follow and react intelligently to a bright round object. By the time it was displayed in a science museum, it was capable of distinguishing and reacting to human faces.
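Micromagic’s actual software isn’t public, but the first stage – tracking a bright round object and reacting to it – can be sketched in a few lines: find the centroid of the bright pixels in a frame, then turn the camera towards it. The threshold and the pan/tilt commands here are invented for illustration.

```python
# Sketch of a Hexapod-style tracking loop: locate the brightest blob in a
# greyscale frame and decide which way to steer. Thresholds and the command
# vocabulary are illustrative, not Micromagic's real interface.

def blob_centroid(frame, threshold=200):
    """Return the (row, col) centroid of pixels at or above threshold,
    or None if nothing bright is in view."""
    points = [(r, c) for r, row in enumerate(frame)
                     for c, v in enumerate(row) if v >= threshold]
    if not points:
        return None
    n = len(points)
    return (sum(r for r, _ in points) / n, sum(c for _, c in points) / n)

def steer(frame):
    """Map the blob's offset from frame centre to a pan/tilt command."""
    centroid = blob_centroid(frame)
    if centroid is None:
        return "search"            # nothing visible: keep scanning
    rows, cols = len(frame), len(frame[0])
    dr = centroid[0] - rows / 2
    dc = centroid[1] - cols / 2
    pan = "left" if dc < -1 else "right" if dc > 1 else "hold"
    tilt = "up" if dr < -1 else "down" if dr > 1 else "hold"
    return (pan, tilt)

# A dark 8x8 frame with a bright spot in the top-left quadrant.
frame = [[0] * 8 for _ in range(8)]
frame[1][1] = frame[1][2] = frame[2][1] = frame[2][2] = 255
print(steer(frame))   # the spot is up and to the left of centre
```

Recognising faces, as the museum version did, replaces the brightness test with a trained classifier, but the track-and-react loop around it stays much the same.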
// The Internet //
Many have asked ‘will the Internet become intelligent?’. I believe it already is, to a very limited extent, and with a low-level form of machine intelligence. It may even have a dissociated form of consciousness as well. I believe this will become more apparent as the scale and complexity of the Internet increases.
In very basic terms, the private networks on the Internet can be thought of as neurons exchanging data. In reality, there are now countless networks constantly exchanging massive amounts of data. We also have the development of more dynamic and interactive web pages that read and write information to hosted databases using SQL commands, so in a sense it’s an automated learning system, although the databases themselves are mostly isolated from each other and human input is still needed at some point.
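That read/write cycle is easy to show in miniature. The sketch below uses Python’s built-in sqlite3 module; the table and column names are invented for illustration, and a real site does the same thing at vastly greater scale.

```python
# The 'pages writing to and reading from hosted databases' idea in miniature:
# one handler stores what it was told, another retrieves it later.
import sqlite3

conn = sqlite3.connect(":memory:")   # in-memory database for the demo
conn.execute("CREATE TABLE facts (topic TEXT, value TEXT)")

# 'Write' phase: a page handler storing submitted information.
conn.execute("INSERT INTO facts VALUES (?, ?)",
             ("tcp", "retransmits lost packets"))
conn.commit()

# 'Read' phase: another page querying the stored knowledge back out.
row = conn.execute("SELECT value FROM facts WHERE topic = ?",
                   ("tcp",)).fetchone()
print(row[0])   # prints: retransmits lost packets
```

The point isn’t the SQL itself but the pattern: information goes in automatically, persists, and comes back out without a human re-entering it.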
The Transmission Control Protocol, one of the very basic components of the Internet, determines how data is exchanged. Together with the IP routing beneath it, it ensures data travels an efficient route between two locations, and if part of the Internet is damaged or unusable, traffic is rerouted so the data still reaches its destination. If a data packet is corrupted or lost, another one is requested from the source. So here we have another essential component of an intelligent system – it’s adaptive.
Just as I was about to post this, another ICT tech made a very similar point in a separate discussion.
His idea is that a packet error during a cyclic redundancy check (strictly, TCP itself uses a checksum; CRCs operate at the link layer) could start a cascading effect from one node to another, eventually leading to emergent behaviour. It’s quite a plausible scenario as well.