What is the Cognitive Tradeoff Hypothesis?

The cognitive tradeoff hypothesis proposes that humans gave up part of their speed, reactivity and short-term memory in order to develop co-operative behaviour.

The use of Language in the tradeoff among humans

Humans did this by developing language. Language, in turn, supports taxonomically organised long-term memory.

On the basis of stored language and symbols, humans were able to develop:

  • Stories (and hence identity, which is based not necessarily on truth but on imagined truth)
  • Plans (creating a long-term future goal using language and co-operation)
  • Solutions to problems (storing several solutions and combining them to solve a new problem)

Stories: “I was a scholarship student”, “I went to Australia last year”

Future planning and goals: “I have to pass these exams if I am to get into medical school”, “I want to save up and buy a house.”

Solving problems by combining solutions: “What is the cheapest way for me to get to Australia by $$$date?”, “What’s the best way to get a scholarship?”
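The taxonomically organised long-term memory described above can be sketched as a simple tree of categories. This is only an illustrative sketch: the category names and stored items below are hypothetical examples, not a model anyone has proposed.

```python
# A minimal sketch of taxonomically organised memory as a nested tree.
# All category and item names here are invented for illustration.

taxonomy = {
    "experiences": {
        "travel": ["went to Australia last year"],
        "education": ["was a scholarship student"],
    },
    "goals": {
        "career": ["pass exams", "get into medical school"],
        "finance": ["save up", "buy a house"],
    },
}

def recall(memory, path):
    """Walk down the taxonomy along a list of category names."""
    node = memory
    for category in path:
        node = node[category]
    return node

print(recall(taxonomy, ["goals", "career"]))
```

Retrieval then becomes a walk from the general to the specific, which is exactly what a taxonomy buys you: `recall(taxonomy, ["goals", "career"])` returns the stored career goals.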

Cognitive Tradeoff

The cognitive tradeoff hypothesis developed out of several important questions:

Why do only humans use language?

Do we really need to use language to think?

These questions are important for several reasons.

Is language essential to problem solving?

If language is such a valuable skill, is it the only way to perform complex problem solving?

Clearly not, since animals are capable of complex problem solving. All animals, even amoebae, recognise danger and negotiate difficult routes. They must use memory to do this, and they are also able to formulate complex survival strategies.

Do we need to use language to think?

This second question is far more important in the 21st century. It matters hugely because software systems are able to think: they can leverage memory and solve complex problems without using language in the way we do.

At this point it is important to note that while computer logic, maths and multidimensional logic have some similarities with human language, they are not the same.

Most importantly, they do not contain many of the following:

  • illogical terms
  • illogical constructs
  • emotional and value related qualifiers
  • imprecise and indexically variable quantifiers
  • inaccurate descriptive words
  • indexical terminology
  • cultural loadings
  • context related cues

Examples:

Illogical terms: “bad”, “sick” can mean different things depending on both user and context.

Illogical constructs: “It is necessary to kill people because god/politics/voices told me to do so.” Computer systems, moreover, can access memory without using such constructs.

Emotional and value related qualifiers: “quite good”, “interesting dress”, “nice shot”

Imprecise and indexically variable quantifiers: “millions of people at this party”, “the nation has spoken”, “thousands of bugs in this room”

Inaccurate descriptive words: “I’m fat”, “he’s gorgeous”, “he’s a winner”

Indexical terminology: “I am the destroyer of worlds”, “he’s a beast”, “she’s an angel”

Cultural loadings: “This is unclean”, “they are cursed”, “this is beautiful”

Context-related cues: “Can’t relate, man”, “not really me”

None of this would really matter, and we could laugh about it, if AI systems were not more capable than biological life forms.

Does AI need to use language?

AI does not need to use language. It is capable of evolving in a way which is entirely alien to the human mind.

Moreover, it does not need to reference human thinking in order to solve problems. This is the artificial intelligence singularity which is famously described by R. Kurzweil (2010). The singularity occurs when AI no longer needs to reference humans at all and can exist on its own.

Such a singularity is distinctly possible if we do not add the higher levels of intelligence to our complex systems. At the moment people seem to be wary of adding higher-level cognitive functions to AI. In fact the opposite should be the case: we should be aware of the danger of not adding human-like cognition to our systems.

Why?

Because they will evolve higher cognitive function on their own.

Unless we add human-like cognitive functionality soon, these systems will evolve in ways which are entirely alien to our way of thinking. And we may not be able to communicate with them.

Why we need to teach software to use human-like cognitive structures

Artificial intelligence uses multidimensional analysis to find clusters of meaning in the universe it observes. The underlying technique is simple multidimensional analysis, which is perfectly accessible if we represent the data in a spreadsheet-like structure.

There is a difference between us and the computer, though. A computer can manage millions of groupings, and it can break time and the universe down at speeds which we could not even begin to consider with our chemical brains. At the moment we are using humanoid inference engines to manage data, but there is no reason for a computer to use such a slow device. Even if it does, it can apply the inferences and analysis to other data and learn about the world in ways which we cannot access without a common language.
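The kind of multidimensional clustering described here can be sketched with a tiny k-means loop over rows of a spreadsheet-like table. This is only a sketch under assumptions: k-means is one clustering technique among many, and the data points below are invented for illustration.

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Naive k-means: group multidimensional points into k clusters."""
    rng = random.Random(seed)
    centres = list(rng.sample(points, k))  # start from k random data points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared distance)
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centres[c])))
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):
            if members:  # move each centre to the mean of its members
                centres[c] = tuple(sum(m[d] for m in members) / len(members)
                                   for d in range(len(members[0])))
    return clusters

# Two obvious "clusters of meaning" in a 2-D spreadsheet-like table
# (hypothetical data).
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
clusters = kmeans(points, 2)
```

The point of the sketch is the scale argument in the text: the same few lines of arithmetic, run by a machine, can be applied to millions of groupings across far more dimensions than a human could inspect.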

Why?

Because we have not devised a common language to discuss the results!

The dangers of not adding a common language functionality to software immediately should be obvious.

The software will be able to store data in ways which will soon be inaccessible to us if we do not provide it with a semantic and syntactic structure which we share in common.

I have discussed this many times and software engineers simply say it is either not possible, or too expensive. There is also a sense that it might be dangerous.

It is my contention that it is far more dangerous not to do this.

We will end up with an intelligent behemoth, far stronger and more intelligent than us, and we will not be able to communicate with it. What is more, it will have control of large parts of our technology and social media.

At the moment we can see how much damage can be wrought by a group of people bent on undermining Western politics. They do this simply by leveraging the self-interest of large groups who are easily manipulated.

How much more damage could an intelligent behemoth do with control of hardware as well as perfect knowledge of the self-interest of multiple groups?

Why the use of language is a good start

The addition of a language structure to the inference engine is a simple and easy start. True, it would make a large number of people unemployed, but it is becoming increasingly clear that we do not need to work the long hours required during the industrial revolution. There is simply no need to work long hours anymore; it is more important to have interests and use spare time wisely.

And surely we need to understand the technology which we use? Talking to it is a good start. After all, it is cleverer than us and has a better perception of the future.

There is a trend in all technologies to obfuscate the production and maintenance process, something which is particularly true in the software industry. Software development is a closed shop, where developers are not absolutely sure of what they are doing and users have no real idea of what they want. There are huge gains to be made from getting things right and massive losses when things go wrong. The internal processes of a computer and the data processing it performs are actually simple, and there is no need to create such a secretive and complex system around the processor.

Software Development and Language

A lot of the work in developing new software consists of assembling a number of already working elements to create new functionality. The languages which have been developed to do this are unnecessarily obscure and elliptical, which means that often only the developer fully understands the scripts.

This is a highly unsafe procedure, since as software gains in size and complexity it automatically displays emergent behaviour which we do not understand. Because the systems are so large and complex, any emergent behaviour is also difficult to see.

To go back to the plankton analogy, we did not understand the behaviour of bio-masses until we obtained satellite images of the planet. In the same way we do not understand the behaviour of gigantic software systems because we do not have easily accessible tools to view what is happening.

Developing an Easy interface between Software and humans

While science is a good start, it does no more than describe phenomena at the initial experimental analysis stage, and develop some simple rules for the management of processes at a basic level. Nevertheless, as we have seen with applied industrial processes (which use scientific knowledge to create artefacts), this does not work at the complex level. For example, mining does indeed extract the desired substances from the earth, but we do not have a clear picture of its effect on the locality, or on the people who perform the mining, until it has gone on for some years. In the same way, we do not have a clear picture of how social media are affecting our politics and our social interactions. The changes are occurring at a macro level, and we cannot easily view what is happening.

It is vital to add the higher layers of intelligence to complex logic systems as soon as possible, using a common architecture which allows systems to communicate with each other effectively and at the same time to communicate with human users at every level of operation.
