Neurohacking Tutorial 7 - Imagination & Related Abilities - Perception From the Bottom Up
Written by NHA
Thursday 13 October 2011, 19:35
Article Index
Neurohacking Tutorial 7 - Imagination & Related Abilities
Network 3 & Mirror Neurons
Perception From the Bottom Up
What Happens if Things Go Wrong
The Mind's Inner Model
NHA Guide to Methods & Technology
The Most Important Bits to Remember
Hacks & Exercises
Notes, References & Answers

 

 

 

Perception From the Bottom Up


How cells perceive: the doors of perception

 

To understand the process of imagination, and how a concrete process of physical motion can be turned into an abstract idea, we need to look at what perception really is.

Sentience is the ability to receive information from the senses. Perception is the ability to interpret that data and give it meaning. All healthy life, even a single cell, achieves some form of perception of, and response to, its environment; all life deals with input and produces output that is behavioral and that is to the organism's advantage. Every cell has its own little life in which it is born, grows, reproduces and dies, and in the meantime it perceives, responds and adapts. It can take steps to protect itself from harm, seek nutrients, and communicate with its neighbors.

Your brain is a series of cell ‘neighborhoods’, architecturally directed by your mind, but right down at the cellular level, the process of perception is simply mechanical and automatic interaction.

All basic output (changes of behavior in the system) in living systems concerns behavior as motion. A cell for example only has three behavioral directives:

  • Don’t move (relax, digest, consider, grow, develop)

  • Move towards X and make contact (stretch, explore, grow, develop, attracted to input that is beneficial to development)

  • Move away from Y and avoid it or get rid of it (repelled from input that is harmful and protection is needed)
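If it helps to see the three directives laid out mechanically, here is a minimal sketch in Python; the function, the 'valence' score and the example numbers are invented purely for illustration and are not part of any real cell's chemistry:

# Toy model of a cell's three behavioral directives (illustrative only).
# signal_valence > 0 : input looks beneficial to development
# signal_valence < 0 : input looks harmful; protection is needed
# signal_valence == 0: nothing relevant is going on

def cell_response(signal_valence):
    if signal_valence > 0:
        return "move towards: stretch, explore, make contact"
    if signal_valence < 0:
        return "move away: avoid the input or get rid of it"
    return "don't move: relax, digest, consolidate"

# The stretch-relax learning cycle alternates the first and third behaviors;
# the second interrupts learning when protection is needed.
print(cell_response(0.8))    # nutrient detected
print(cell_response(-0.5))   # toxin detected
print(cell_response(0.0))    # nothing new: rest and digest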

 

These are the basics of every creature’s overall interaction with nature, each behavior leading to success when done at the right time in the relevant circumstances. The first two form the stretch-relax cycle of learning (and note that just as much learning & developing is done when you are relaxing as when you are exploring). The third basic movement is for our protection, when learning must be interrupted, for example during a health threat or impending danger.

But the important thing to grasp is that all movement (processing) inside the cell is in the service of either our development or protection.

This basic 3-way behavior is discernible in all life, from plants and amoebae to humans. Large, multi-celled creatures like us perceive input and respond with 'behavior' as whole units; body and brain and mind all working together (hopefully) to do exactly the same thing that the single-celled amoeba is doing: process the input and come up with appropriate outputs of beneficial behavior.

This is what our complicated nervous systems can do best. But everything with even the merest hint of an elementary nervous system can do this; even single cells. Every one of your brain cells is a tiny little organism; able to interactively perceive input in its own environment (its context) and respond with the appropriate cellular behavior as its output.

The 'brain' of a cell is on its outside. Every animal cell has a ‘skin’ (called a membrane) that acts as its brain and nervous system. The membrane contains sensory receptors (like your senses), and ‘effectors’ (structures that can cause physical motion to do work in the cell). The cell also has 'innards': organelles (organs), including reproductive organs in its nucleus that contain your DNA.

If you watch a single brain cell in a test tube full of nutrients (if you haven't got one lying around right now, watch some of the single-celled creatures to be found in any drop of pondwater, below) you will see one of these three ‘types of motion’ programs running:

 

Here are two amoebas having lunch together. This is 'move towards'. The cells move towards warmth, light and nutrients, and move away from cold, too much heat, or poisons. They also relax, stop moving and digest their food. The membrane separates the outside of the cell from the inside, like your skin, and provides protection for the cell’s contents, but also allows interaction via receptors (sensors), so that growth & development can take place.

Different receptors (too small to see at this resolution) on the membrane are like radio or TV antennae that detect signals from different inputs (like our senses). Receptors are input devices.

Another type of molecule in the cell membrane is an effector. An effector binds to a protein and this alters the activity of that protein. Effectors are output devices (they cause a change in behavior of the system).

Each brain cell ‘translates’ signals of chemical information about its context (neurotransmission, nutrients, toxins) into sensorimotor information (sensation as movement) like this: when a chemical signal makes contact with a receptor it causes a physical shape-change on the other end of the receptor (inside the cell). When the signal stops, the shape changes back. Let’s have a look at this in a model close up:

 

The orange ‘plasma membrane’ is the skin of the cell. In (a) we can see the receptor sticking out of the cell. (The spiky end is outside the cell, the other end is inside the cell).

In (b) an ‘agonist’ signal (a bit of relevant chemical) has floated by outside the cell and stuck to the end of the sensor. In this case it is a molecule shaped like a yellow diamond; the right shape to ‘fit’ the receptor. It could be a bit of food floating around for an amoeba, a molecule of neurotransmitter for a brain cell, a light frequency hitting a retinal cell, or a scent molecule if this particular cell was in a network up an animal’s nose (receptors are specific to certain types of signals and are ‘tailored’ to suit the cell’s surroundings, so they can ignore anything that doesn’t fit).

 

Notice that the signal’s presence triggers a response: it causes the receptor itself and also the “G proteins” (the little pink and red shapes inside the cell) to change shape and separate.

Once they have done so, one of them can interact with an effector in (c) which responds now that the protein is a ‘relevant’ shape for it to recognize.

Mechanically, proteins are like ‘transformer’ nanobots programmed by their environment (via receptors) to alter their shapes and join together or split apart to form tools that turn on and off the machinery of cell processing and perform its various tasks.

 

Epigenetics

Hopefully you can see how cell receptors and effectors work together –when an environmental signal triggers the receptor (stimulus), the end inside the cell changes shape (response). The ‘inner’ end of the receptor now ‘fits’ the protein that latches onto it, and in doing so the protein changes shape itself and signals the effector. (The ‘nanobot’ proteins change their shape when given a signal because signals change the tiny electrical charges in their molecules). This shape-shifting or ‘transforming’ is actual physical motion, which uses energy and is used to do work in the cell.
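As a rough sketch of that stimulus-response chain (signal, receptor, G protein, effector), here is a toy simulation in Python; the class names and the single true/false 'shape' state are simplifications invented for illustration, not the real biochemistry:

# Toy signal-transduction chain: an external signal changes the receptor's
# inner shape, which lets a G protein change shape and trigger the effector.

class Receptor:
    def __init__(self, accepts):
        self.accepts = accepts          # the signal 'shape' this receptor fits
        self.inner_end_changed = False  # shape of the end inside the cell

    def sense(self, signal):
        # Only a matching signal changes the receptor's inner shape;
        # anything that doesn't fit is ignored.
        self.inner_end_changed = (signal == self.accepts)

class GProtein:
    def activated_by(self, receptor):
        # The G protein only 'fits' the receptor once its inner end has changed shape.
        return receptor.inner_end_changed

class Effector:
    def respond(self, protein_is_relevant_shape):
        # Output device: changes the behavior of the system if signalled.
        return "do work in the cell" if protein_is_relevant_shape else "stay idle"

receptor, g_protein, effector = Receptor("neurotransmitter"), GProtein(), Effector()

receptor.sense("neurotransmitter")   # a matching signal arrives outside the cell
print(effector.respond(g_protein.activated_by(receptor)))   # do work in the cell

receptor.sense("irrelevant stuff")   # the signal stops; the shape changes back
print(effector.respond(g_protein.activated_by(receptor)))   # stay idle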

Many of the cell’s processing tasks will require that genes be activated or deactivated to code for the needed proteins. When a signal comes in that a gene product is needed, the signal comes from the cell’s own environment, not from some emergent property of the cell itself. It is these signals originating from the environment directly outside cells that activate the expression of a gene.

All the movement happens inside the cell; cells don’t wander about in the brain to chat in response to signals. In their fixed context, the behavior ‘move towards’ is accomplished by growing more connections and sending more signals between cells, and ‘move away’ is accomplished by pruning their own connections away and sending fewer signals.

These processes can only be accomplished by gene switches initiating changes in the genome, and the only triggers for this process come from the cells' own environment.

An organism’s interaction with the environment changing the expression of its genome is an example of epigenetics. This is how epigenetic changes begin, at the cellular level: something in the cell’s environment causes it to respond by stretching or relaxing, expressing (or suppressing) its own activity, and the end result triggers the expression (or suppression) of a gene or genes.

Sustained or frequent signals have a long-term effect, and this is why epigenetic hacking requires sustained maintenance of habits that exert control over cellular environments (e.g. control of your blood pressure, nutrition or neurochemistry) and also why it is long-lasting and rarely causes snapback.
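A toy sketch of why sustained signals matter: here an imaginary gene switch only flips when the environmental signal has been present for enough consecutive time-steps. The threshold and the 'tick' idea are invented for illustration; real epigenetic switches are far messier:

# Toy epigenetic switch: gene expression only changes after a signal from the
# cell's environment has been sustained long enough (illustrative numbers).

def gene_expressed(signal_history, required_ticks=5):
    # Count how many consecutive time-steps the signal has been present,
    # ending at the most recent tick.
    streak = 0
    for present in signal_history:
        streak = streak + 1 if present else 0
    return streak >= required_ticks

brief_signal = [True, True, False, True, False, True, False]
sustained_signal = [False, True, True, True, True, True, True]

print(gene_expressed(brief_signal))      # False: brief signals don't flip the switch
print(gene_expressed(sustained_signal))  # True: a sustained habit does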

The tiny moves that cells make inside themselves are what enable all the physical and mental processes that trigger all the resulting behavioral moves we make. Cells’ moves are based on their perception of their environment because they assume their perception is an accurate reflection of what’s going on in our environment. They believe – and also remember – only what they are ‘told’ by the signals they perceive; only what they physically experience.

Our own behavioral flexibility relies on the ability to quickly shift to a new cognitive set (i.e. change our point of view) in response to changing external demands. But the mechanisms of our cognitive flexibility operate even at this single cell level; right from the bottom up.

This is why bottom-up hacking is usually permanent or at least long lasting; top down hacking is usually temporary or transient.

Hopefully now you are beginning to see how environmental signals control the very basics of our perception and our memory through the senses, here at the cellular level. Receptors have awareness of and pick up signals from their environment, and when the shape-changing ‘switch’ is activated it creates a physical sensation for the cell as a pattern of movements and a memory of that pattern. This is the cell’s equivalent of perception and it controls the cell.

To a cell, all thought is motion, because all thought causes signaling and signaling causes motion. Inner reflection is therefore also input to a cell, as are intellect, creativity and all our mental processes, because all thought sends environmental signals to cells in their neighborhood. The cells don’t know where the input comes from, they just respond.

All thought relies on a combination of current input, prediction and memory. So the contents of your thoughts create different chemical environments for the cells, and that is how thought can affect your behavior.

Current input, prediction and memory are all based on perception in the first place. If cells don't perceive anything, they don't respond. If you don't perceive something, there is no cellular motion and whatever it is will not be processed at all.

 


DO IT NOW

 

How we can edit reality - inattentional blindness

The most famous demonstration of inattentional blindness was staged in 1999 by Daniel Simons and Christopher Chabris. It involves a game of basketball. Chances are you've seen it or read about a version of this before. If not, without reading the accompanying spoilers on youtube etc, have a look at:

http://www.youtube.com/watch?v=bioyh7Gnskg&feature=related

In this version the task is to count the number of passes made by the team in black

OR

http://www.youtube.com/watch?v=2pK0BQ9CUHk

(but this version has a spoiler, so don't read the introductory text). In this version the task is to count the number of passes made by the team in white. You won't believe your brain.

OR

http://viscog.beckman.illinois.edu/flashmovie/15.php

In this version the task is to count the number of passes made by the team in white. (In this version it may be impossible to get audio, that doesn't matter, but after the first watch you may have to watch it two or three times more to see what’s going on.)

 

Imagination Reloaded

The movements inside cells do things like turn on genes, make new proteins, convert fuel to energy, signal other cells and so on. This process is, literally, how perception works from the bottom up. We now see that ‘input’ to any cell inside the brain actually comes from the cell’s own environment; the space between cells. Information from the outside world ends up in-between cells in the brain via the senses because our skin and eyes and ears are all made of cells doing exactly the same thing –responding to their environment by turning signals into internal movements. Imagination is the translation program that processes all input, regardless of whether it is internal or external. Without it, we could not perceive anything.

We can see how perception turns a series of cellular mechanical movements into automatic responses, and how the cell remembers those responses, but how does it turn them into the familiar abstract concepts our brain can talk and think about?

If we cannot “form a mental image of something”, unconsciously, we literally cannot ‘make sense out of it’ consciously or perceive it clearly. Often we cannot perceive it at all. This has been known about on the physical level for some time and also on the functional one, from experiments with young animals.

On the physical level it seems easy to understand –if one eye of a baby animal is covered and never used, the ability of that eye to see will not develop. Nonuse leads to atrophy and the networks that would have developed break down. On the functional level too, it seems obvious that if a child never hears language, s/he will not develop language. What is new is the evidence that this also occurs on the abstract levels of executive function, abilities and ideas. If we never experience a particular way of thinking about things or doing things, if we never build a concept of it, we will never be able to recognize or understand it.

We may hear the sounds and see the pictures from the outside (percepts) but can’t imagine anything that they associate with (concepts) on the inside, or we may associate percepts with mismatched concepts.

Some people cannot grasp poetry or spirituality, others cannot grasp math or science, in exactly the same way that some creatures can only recognise their dinner if it is moving. If a creature keeps still, many lifeforms cannot recognize it as food, which shows the value of the 'freeze' response in danger. There are not enough points of similarity between the percept of a dead fly and a frog's known concepts of food for the frog to perceive it as edible. In exactly the same way, if there are not enough points of similarity between the percept of a poem and a person's known concepts of language use for the person to perceive it as understandable, it will not be recognized as making any sense.

Materially, if we haven’t built enough of a network to process color or odor or empathy or poetry or calculus, we cannot understand it. There are not enough points of similarity between percepts coming in and the concepts of our database. This can happen for various reasons, which we'll discuss in the problems section later.

Update your paradigm: Imagination’s primary task is not to deal with pretend stuff, it is to deal with real stuff.

Sure, we can use it creatively to invent pretend stuff on purpose just like we can use nuts and bolts to make jewellery, but this is not its main function even though it has until very recently been mistaken for it.

If you are having problems understanding the concepts here it may help you to (literally) replace the word “imagination” with the phrase “image processing” throughout the tutorial (you can do this with Word’s edit/replace function). This will remind you that we are explaining a recently discovered set of mental processes here and not getting caught up in the old ideas about or interpretations of ‘imagination’.

It’s already clear that we need to seriously rethink the popular definition of imagination in the light of new knowledge. We started with:

“The ability to form mental images or concepts not present to the senses”.

But in fact we now see that as soon as a mental image is formed, it becomes immediately ‘present to the senses’, because we cannot form a mental image without sending signals to cells, and these become physical sensations and sensorimotor patterns and memories the moment that cells receive them. That leaves us with the tighter definition:

“The ability to form mental images or concepts”.

And that’s a much more accurate definition for the process neuroscience now knows imagination to be.

You may be thinking that cellular perception doesn’t count as ‘senses’; that only things like sight and hearing are ‘proper’ senses. But how do we detect signals coming into cells in the retina or cochlea? By cellular perception: the signals coming into cells become physical sensations and sensorimotor patterns of movement within the cell! All sensory information comes in this way and is stored in short term memory as “the pattern of movements that the cell made in response to its signals”. The cell doesn’t ‘know’ that the information came from ‘out there’; only we know that. For individual cells, all input comes from ‘out there’. None of this information is conscious in the cell, and we can only make it available to the conscious mind via imagination.

 

The eyes are the camcorders of the mind

We know that if we point a camera at a scene ‘out there’ and record footage, and that footage is viewed on a monitor while we do so, the light in the picture on the screen is not coming from ‘out there’, it is coming from the screen or the projector in response to the information in whatever program is running to translate and project the information from the footage.

We are going to have to get used to the idea that the brain is doing very much the same thing.

Research reveals that when we look at the world “out there” the light that we ‘see’ by doesn’t come from ‘out there’ at all; it comes from the same place that the ‘light’ in dreams comes from; where ALL our imagery comes from. We imagine it as the light ‘out there’ because it is an inner response to signals coming in from retinal cells, but in fact all perception is ‘inner’ imagery.

The ‘projection of imagery’ process is identical regardless of the source of its input. We imagine it came from outside or inside depending on multiple variables: its intensity, its nature, what networks it has been processed by, what mirror neurons have fired, body signals, and our own interaction with it. Cells don’t know where their input is coming from; they just respond. Our experience of life is determined by how well imagination can process cells’ responses; in other words imagination takes an ongoing educated guess as to what is going on, and how educated that guess is (prior experience, amount of input, comparative memory, context probability) determines the clarity of our perception.
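One simple way to picture that 'educated guess' (a toy illustration, not a claim about the actual neural algorithm) is a weighted average of prior experience and current input, where each is weighted by how reliable it is:

# Toy 'educated guess': blend prior experience with current input, weighting
# each by its reliability. All the numbers here are made up for illustration.

def educated_guess(prior_estimate, prior_reliability, input_value, input_reliability):
    total = prior_reliability + input_reliability
    return (prior_estimate * prior_reliability + input_value * input_reliability) / total

# A brief, unclear glimpse: prior experience dominates what we perceive.
print(educated_guess(prior_estimate=10.0, prior_reliability=9.0,
                     input_value=20.0, input_reliability=1.0))   # 11.0

# A long, clear look: the current input dominates the percept.
print(educated_guess(prior_estimate=10.0, prior_reliability=1.0,
                     input_value=20.0, input_reliability=9.0))   # 19.0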

The mind doesn't waste time or energy. If there’s only a glimpse of footage, it often can’t imagine what we saw or heard, so it forgets it (this is fortunate, or we’d remember everything that didn’t matter). If we’re doing something routine that has become automatic, imagination often doesn't bother to refresh the page when changes happen (which is how we get confused if a regularly used door or cupboard is moved and keep turning to where it used to be, and it's also how we miss noticing small changes).

fMRI has given us some delightful evidence that conscious memory, perception and empathy are all functions enabled by imagination, as is all conscious thought; imagination is the root process behind them all.

 

 

Many essential functions use the same areas of the same networks. The core process is imagination. Neuroimaging of these tasks demonstrates that similar circuits are activated by detailed planning, theory of mind, and episodic recall, and that these same circuits are also part of the "default network" of brain regions which tend to become active when subjects are not given any instructions at all. That is, "undirected" tasks (such as staring at a blank screen) are accompanied by a "highly stereotypical pattern" of brain activity that overlaps strongly with those involved in tasks requiring imagination.

 

Imagination is the ability to form mental images or concepts, period. All of them.

 

Imagination is the process of translating high-dimensional patterns/sequences of mechanical sensorimotor cellular signals into imagery and abstract concepts (behavior into ideas), and manipulating them so that intelligence can predict and strategize for the most beneficial adaptations; it also does the same thing in reverse (converting abstract concepts into behavior). The inner model acts like a user interface between unconscious processing and conscious thought.

When we build our own models of reality, regardless of what format we use, we base them on the body of knowledge that is already established as far as possible. If we build a model to explain some details of physics, for example, we take into account the physical laws of the universe that we already have plenty of proof for, then we try to fill in the missing bits and unify the ideas.

The mind must build its model right from the beginning of life, and doesn't have time to wait until it understands scientific laws. It uses the only body of knowledge that IT knows as being solid and real and provable; the knowledge of its body and physical experience.

The mind doesn't have time to wait for loads of brain networks to grow before it makes its model either -indeed, without association those networks won't grow anyway! The tools already available to it are all it has; a mirror neuron system that can represent concepts as images, and a body-in-space awareness (proprioception) that associates positions with coordinates through physical experience.

To build the model, the same spatial 'body awareness' process that enables us to know where we are in space is now used to allocate coordinates to abstract concepts and percepts.

We need to understand the process on the concrete, material level first, as that will help us understand how the same process can be used on either a concrete or an abstract level. This is a good example of the mind's ability to use the same processes on both concrete and abstract levels.

Neuroscientists have identified three types of cells in N3: place cells, head direction cells and grid cells.

Place cells are in the hippo and exhibit a high rate of firing whenever an animal is in a specific location in an environment corresponding to the cell's "place field".[11] Place cells map out and encode your location as you move around the environment, lighting up to tell the brain 'you are here' when you pass a specific place. Each place cell in the brain ‘prefers’ a slightly different geographical place. For example, as you wander around your home looking for something that has gone missing, different cells in your hippo will be active at different locations. The area within your environment that triggers a given cell is called its ‘place field’. Across lots of cells, the whole environment can be represented. This is how you can learn to get out of a maze, where things are around town, and how to get back home again, even in the dark.
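A common idealisation of a place cell (a standard textbook model, not something specific to this tutorial) treats its firing rate as a bump centred on its place field; a minimal sketch, with the peak rate and field size made up for illustration:

import math

# Idealised place cell: firing rate falls off with distance from the centre
# of its 'place field' (peak rate and field width chosen for illustration).

def place_cell_rate(position, field_centre, field_width=0.3, peak_rate=20.0):
    dx = position[0] - field_centre[0]
    dy = position[1] - field_centre[1]
    return peak_rate * math.exp(-(dx * dx + dy * dy) / (2 * field_width ** 2))

# 'You are here': the cell fires hard inside its field, barely at all outside it.
print(round(place_cell_rate((1.0, 1.0), field_centre=(1.0, 1.0)), 1))   # 20.0
print(round(place_cell_rate((2.0, 2.0), field_centre=(1.0, 1.0)), 1))   # 0.0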

Knowing as we do how use (exercise) can build up networks, it shouldn’t come as a surprise to us that city cab drivers (before GPS) had the biggest hippos, with more and denser connections than the rest of us, and consequently better spatial memory skills.

Place cells are active when you visit a particular area, regardless of which direction you’re facing, so the hippo also contains ‘directional cells’ that allow you to remember which way you are facing and which way you are heading, independent of where you are.

Head direction cells act like a compass. They are active only when your head points in a specific direction within an environment. These neurons fire at a steady peak rate when you face their preferred direction, and the firing rate drops towards a low baseline as your head turns away from that direction (usually reaching baseline when facing about 45° away from it).
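Here is a toy tuning curve for a head direction cell, written so that firing peaks at the preferred direction and decays to baseline by roughly 45° away; the curve shape and numbers are illustrative, not measured values:

import math

# Idealised head direction cell: peak firing at its preferred direction,
# decaying to a low baseline roughly 45 degrees away (illustrative values).

def head_direction_rate(heading_deg, preferred_deg, peak=40.0, baseline=1.0, width=18.0):
    diff = (heading_deg - preferred_deg + 180) % 360 - 180   # wrap to [-180, 180)
    return baseline + (peak - baseline) * math.exp(-0.5 * (diff / width) ** 2)

for heading in (90, 110, 135, 180):       # preferred direction here is 90 degrees
    print(heading, round(head_direction_rate(heading, preferred_deg=90), 1))
# 90 fires at the peak, 110 is lower, 135 is close to baseline, 180 is at baseline.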

These cells are found in network 3 and areas where N3 interfaces with N6, including the thalamus, striatum and entorhinal cortex.[12] Head direction cells are not sensitive to geomagnetic fields (i.e. they are not "magnetic compass" cells), and they are neither purely driven by, nor independent of, sensory input.

Grid cells, the third type of cell in N3, use the mind's 'inner model' as a grid-like pattern, akin to how we use latitude and longitude for navigation. The firing fields of each grid cell form a remarkable hexagonal pattern of regular triangles covering the entirety of the person’s environment.

 

Grid cells were discovered in 2005. To achieve the picture above, an electrode capable of recording the activity of an individual neuron was implanted in the dorsomedial entorhinal cortex of a rat, and recordings were made as the rat moved around freely by itself in an open arena. For a grid cell, a dot was placed at the location of the rat's head every time the neuron fired. As illustrated in the picture below, these dots built up over time to form a set of small clusters, and the clusters form the vertices of a grid of equilateral triangles.[13]

 

This regular triangle-pattern is what distinguishes grid cells from other types of cells that show spatial firing correlates.

The arrangement of spatial firing fields all at equal distances from their neighbors led to a hypothesis that these cells encode a cognitive representation of space [13]. The discovery also suggested a mechanism for dynamic computation of self-position based on continuously updated information about position and direction.

What makes grid cells especially interesting is that the regularity in grid spacing does not derive from any regularity in the external environment or in the sensory input available to an animal. Such a pattern of symmetric receptive fields could not result from external sensory input alone but must also be due to pattern generation within the brain itself.
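That internally generated regularity is often idealised in the research literature as the sum of three cosine gratings whose orientations differ by 60°, which produces exactly this hexagonal lattice; a minimal sketch of that standard model (grid spacing and phase values are arbitrary here):

import math

# Idealised grid cell: summing three cosine gratings oriented 60 degrees apart
# gives a hexagonal (equilateral-triangle) firing pattern over the whole
# environment. Spacing and phase are arbitrary illustrative choices.

def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    k = 4 * math.pi / (math.sqrt(3) * spacing)   # wave number for this grid spacing
    total = 0.0
    for angle_deg in (0, 60, 120):               # three directions, 60 degrees apart
        theta = math.radians(angle_deg)
        total += math.cos(k * ((x - phase[0]) * math.cos(theta) +
                               (y - phase[1]) * math.sin(theta)))
    return max(total / 3.0, 0.0)                 # normalised rate; can't be negative

# The same cell fires at a whole lattice of locations covering the arena:
for pos in ((0.0, 0.0), (0.0, 0.5), (0.433, 0.25), (0.2, 0.1)):
    print(pos, round(grid_cell_rate(*pos), 2))
# The first three points are grid vertices (rate ~1.0); the last sits between
# vertices, where this cell stays quiet.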

Grid cells use the inner model: the abstract spatial structure that is constructed inside the brain and superimposed on every context, regardless of sensory input or the actual features of the environment, categorising everything according to its location on an imagined 3D grid.

In repeated exposures to the same environment, the grid cells fire at the identical positions, suggesting that grid cells construct a stable map of the environment based on N3's inner model. The inner model is activated in a universal manner across environments, regardless of the environment's particular landmarks, suggesting that the same neural model is applied everywhere. The grid associates closely with self-motion cues, because it forms instantaneously in a novel environment and is not perturbed by removal of visual cues. Because the inner model also associates with eidetic core concepts, percept can be matched with concept and experience can be translated into meaning, instantly.

Because of its gridlike nature, the model is potentially infinite and can represent places not visited as well as places that have been visited.

 

Hardware turns into software when code is given meaning.

All that needs to happen to apply the same process to hardware or software is to give them associated meaning. On the hardware level, the concept 'looking back' means turning the head around and looking behind you (a concrete concept). On the abstract level the same concept is given the meaning “looking back into the past” (an abstract concept). All main abstract concepts are based at root on association with physical movements related to animal behaviors.

The existence of a single neural model that can be applied anywhere is efficient and avoids the capacity problem of needing separate maps for every spatial context and separate models for different contexts.

If this grid system is fully functional on the concrete level we not only have our own internal GPS system but an onboard motion sensor as well. N3 logs all movement within the grid, from the complex vectors involved in playing tennis to the microscopic flicker of an eyelid. This ability to associate points in a mental ‘spatial network’ model with real points in space, both in the outside environment and in the muscular movement of the body, reveals the core behind not only the ability to navigate but the entire structure of perception and interpretation, and all memory storage and recall. This is because, on the abstract level, N3 uses its spatial map to build the inner model as a mind map for the whole of memory and learning for the rest of our lives.

Wait a minute, I hear you think, -how can all of our diverse areas of memory and knowledge possibly fit into one mind map? What’s more, what has physical movement and spatial awareness got to do with remembering the factual information needed to pass exams, or remembering someone’s name, or recognising your relatives, or how to spell acetylcholinesterase? The answer is the process of imagination, and we'll explore this later in this tutorial. For now, it's time to take a look at some of the things that can get in the way of N3's system.

 

 



Last updated on Monday 29 May 2017, 13:14