Methods & Technology Intro - Part II: Technology - Diagnosis & Investigation
Neurohacking - Methods & Technology
Written by NHA   
Sunday, 28 February 2010 03:49

Diagnosis & Investigation

 

One aim of neurohacking is to understand the brain mechanisms behind behaviour, and to that end it is necessary to study the brain. Once upon a time the microscope was all we had at our disposal to study brain tissue and, unsurprisingly, not very much was revealed! Even the reality of synapses, the tiny gaps between brain cells, although theorised, had to wait until the advent of electron microscopes for verification in the 1950s.

In the 1970s and 80s, scientists used radioactive material and post-mortem brains to reveal that certain areas of the brain might be involved in different tasks, but today's scanning technology allows us to measure accurately which small areas of the brain are active during which behavioural tasks, and to do so in real time.

 

(1927)

Angiography

Angiography, or arteriography, is a medical imaging technique in which an X-ray picture is taken to visualise the inner opening of blood-filled structures, including arteries and veins. Angiograms require the insertion of a catheter into a peripheral artery, and because blood has the same radiodensity as the surrounding tissues, a radiocontrast agent (which absorbs X-rays) is added to the blood to make visualisation possible. The X-ray images may be taken as still images, displayed on a fluoroscope or on film, which are useful for mapping an area. Alternatively, they may be motion images, usually taken at 30 frames per second, which also show the speed of the blood (actually the speed of the radiocontrast agent within the blood) travelling within the vessel.

 

Chemicals and electricity

Just as the normal sensory input to the brain can be bypassed with electrical or chemical stimulation, so the normal behavioural consequences of stimulation can be pre-empted by using electrical and chemical activity. Chemicals and electricity can be used to stimulate the brain, and they can also be used to measure the activity of the brain. Blood, urine and cerebrospinal fluid can all be sampled and examined to see if they contain specific chemicals. The chemicals are of two types. Some are products secreted by the brain to influence other parts of the body, e.g., hormones and neurotransmitters. Others, known as breakdown products or metabolites, give an indication of which chemicals have been used in the brain, just as wrappers in the bin give an indication of what someone has been eating. The breakdown products provide two pieces of information: they reveal which chemicals have been used and also, by their quantity, the extent to which they have been used.

 

To record chemical activity within the brain requires two very fine tubes or cannulae [singular cannula], one within the other, to be inserted through a hole in the skull. The inner cannula carries a salt solution into the brain and the outer cannula carries the salt solution out of the brain. At the tip of the two cannulae is some special tubing, called dialysis tubing, which allows chemicals in the fluid surrounding it to enter the cannulae. This happens because the salt solution entering the brain contains none of the chemicals of interest surrounding the cells of the brain. Those chemicals in the fluid surrounding the dialysis tubing are all moving and some just happen to move, to diffuse, across the dialysis tubing into the salt solution. The salt solution that leaves the brain also contains whatever chemicals have been picked up from within the brain. The process of collecting chemicals from within the brain using dialysis tubing is called microdialysis. Chemical analysis can then be applied to the outcoming fluid to determine which chemicals are present around the dialysis tubing.

 

Cranial ultrasound uses reflected sound waves to produce pictures of the brain and the inner fluid chambers (ventricles) through which cerebrospinal fluid (CSF) flows. Cranial ultrasound may be done to visualize brain masses during brain surgery.

Ultrasound waves cannot pass through bone; therefore, an ultrasound to evaluate the brain cannot be done unless the skull has been surgically opened.

 

The brain can be stimulated by electricity, but the brain also generates electricity in quantities that vary with its activity. Electrical activity can be recorded using electrodes. A pair of electrodes, attached close together on the scalp with an electrically conducting gel, can be used to detect the cumulative effect of the tiny voltage changes [a few microvolts] generated by nearby neurons. The signal is amplified and fed into a computer. It is easier to visualise what the computer does by describing the older apparatus the computer has replaced. So, rather than the signal being amplified and fed to a computer, consider that the signal was amplified and fed to a device, a galvanometer, which converted the signal into the sideways movements of a pen. A strip of paper moved at a constant speed under the pen. As the current measured by the electrodes changed, so the pen moved back and forth across the paper, creating a long wiggly line, a brain wave. Usually there are several pairs of electrodes on the scalp, each feeding a different pen, or channel. There can be up to 40 channels on one multichannel recorder. The result, from the pen or from the computer, is the Electroencephalogram, or EEG.

The electrical signals can be accurately timed. If a stimulus, e.g., a sound, is applied at a known time in relation to the EEG recording, then the effect of the stimulus on the EEG can be deduced. This process underlies the “event-related potential” [ERP]. In fact, the deduction requires a lot of computation, and repeated applications of the stimulus. Those electrical events that usually accompany the stimulus are enhanced, while those that only occasionally accompany the stimulus are reduced. Eventually, an averaged response emerges from the wealth of electrical signals, and a large, unmistakable, slow wave, the “event-related potential”, becomes clear. The wave is identified by its time in milliseconds after the stimulus, say, 200ms, and whether it is in the negative [N] i.e., downwards, or positive [P] i.e., upwards direction. The P200 is a fairly standard [i.e., regularly observed] ERP. ERPs have been used to investigate many types of cognitive processes, including memory, language and attention, face recognition in children, as well as degenerative disorders, such as Alzheimer’s disease.
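To make the averaging concrete, here is a minimal Python sketch of the idea (the function and variable names are our own illustration, not those of any particular EEG package): epochs of signal are cut out around each stimulus, baseline-corrected, and averaged, so that activity time-locked to the stimulus survives while unrelated activity cancels out.

```python
import numpy as np

def erp_average(eeg, stimulus_onsets, fs, pre_s=0.1, post_s=0.5):
    """Average stimulus-locked EEG epochs to extract an event-related potential.

    eeg             : 1-D array of one channel's samples (microvolts)
    stimulus_onsets : sample indices at which the stimulus was presented
    fs              : sampling rate in Hz
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in stimulus_onsets:
        if onset - pre >= 0 and onset + post <= len(eeg):
            epoch = eeg[onset - pre:onset + post].astype(float)
            epoch -= epoch[:pre].mean()   # baseline-correct on the pre-stimulus interval
            epochs.append(epoch)
    # Averaging enhances activity that reliably follows the stimulus and
    # washes out activity that only occasionally accompanies it.
    # (Assumes at least one complete epoch was found.)
    erp = np.mean(epochs, axis=0)
    times_ms = np.arange(-pre, post) / fs * 1000.0   # a positive peak near +200 ms would be a P200
    return times_ms, erp
```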

 

The EEG suffers from the same problem with the skull as an electrical impulse used to stimulate the brain: just as the skull disperses and attenuates stimulating currents applied from outside, it also disperses and attenuates the weak electrical signals generated by the brain.

The electrical interference of the skull can be minimised by using microelectrodes.

 

Microelectrodes can be used to record the electrical activity of relatively small groups of cells, or even of individual neurons, inside the brain. Micropipettes [sometimes called glass microelectrodes] have such fine tips that they can also be inserted into individual brain cells and used to record their activity.

 

These methods of recording electrical activity can be used in real time and while the subject is active and moving around. They can also detect changes occurring in a very short space of time, milliseconds, which is close to the timescale in which neurons operate.

 

There is an additional way to measure the electrical activity of the brain, but the device that is able to measure it requires the subject to keep very still. Any electrical current necessarily generates a magnetic field, and this is true of nerve cells even though the electric current and the magnetic field they generate are tiny. However, by placing an array of supercooled, highly sensitive detectors around the skull, it is possible not only to detect but also to identify the source of the magnetic fields generated by activity in the brain: this is the principle of the magnetoencephalogram, or MEG. The MEG allows the localised areas of electrical activity that arise in the brain in response to particular stimuli to be identified.

 

There are other devices that measure brain activity and that require the subject to be very still, but these devices do not measure electrical activity; they measure blood flow. The brain requires a continuous supply of blood to provide both energy, in the form of glucose, and oxygen. The brain can neither store these substances nor use alternatives. About 20% of the heart's output of blood goes to the brain. The progress of the blood through the brain, though, is not a flood but a carefully controlled irrigation. The network of blood vessels in the brain is under very sensitive, localised control, such that each area of the brain receives only the amount of blood that it requires: not all areas of the brain receive equal amounts. Those areas that receive the most blood during a particular task [e.g., presentation of an auditory stimulus, thought or manual manipulation], and hence are in receipt of the most glucose and oxygen, are deemed to be the most active. The size of the area of activity and the intensity of the activity should, though, be taken only as a guide to the importance of that area for whatever task is being assessed. The relative importance of the crowd and the players at a sports match, compared to their size and energy consumption, is a useful analogy here.


 

(early 1970s)

Computerised axial tomography (CAT or CT scanning) became available, providing ever more detailed anatomical images of the brain for diagnostic and research purposes. Soon after the introduction of CAT, in the early 1980s, the development of radioligands enabled single photon emission computed tomography (SPECT) and positron emission tomography (PET) scans of the brain.

Two devices measure blood flow and distribution within the brain: positron emission tomography [PET] and magnetic resonance imaging [MRI].


 

(early 1980s)

The PET scan: radioactive material emits small, high-energy particles called positrons. Each emitted positron interacts with a nearby electron, resulting in the annihilation of them both and the production of gamma rays [a form of electromagnetic radiation like X-rays, but of higher energy]. The gamma rays disperse in equal and opposite directions from the point of positron-electron annihilation and can be detected by suitable sensors. A computer can reconstruct the source of the gamma rays using information about which sensors are activated and when. The process of reconstruction is called tomography [the T in PET]. A patient or participant has a small amount of radioactive material injected into their bloodstream. The material is transported in the blood around the body and into the brain. The areas of the brain that command the greatest volume of blood produce the most gamma rays, and it is these areas that are computed and displayed by the PET scan.
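To illustrate the "T" in PET, here is a deliberately simplified Python sketch of unfiltered back-projection, assuming the gamma-ray detections have already been sorted into projections at known angles. Real scanners use coincidence detection and filtered or iterative reconstruction, so treat this only as the core idea: projections from many angles are smeared back across the image and summed, and the source stands out where the smears overlap.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Very crude, unfiltered back-projection.

    sinogram   : array of shape (n_angles, n_detectors); each row holds the counts
                 recorded along parallel lines at one acquisition angle
    angles_deg : the acquisition angle (degrees) of each projection
    """
    n_det = sinogram.shape[1]
    image = np.zeros((n_det, n_det))
    for projection, angle in zip(sinogram, angles_deg):
        smear = np.tile(projection, (n_det, 1))               # spread counts back along their rays
        image += rotate(smear, angle, reshape=False, order=1) # rotate the smear to its angle and add
    return image / len(angles_deg)
```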

 

(early 1980s)

The MRI scan: any charged particle that spins has magnetic resonance. Protons have a positive charge and they spin all the time. These properties mean that all atoms and molecules have magnetic resonance because they contain protons. The technique depends on the fact that different molecules have different magnetic resonances. Two molecular components of the blood are particularly interesting in this regard; they are variants on haemoglobin, the molecule that makes blood red. Haemoglobin with oxygen attached is called oxyhaemoglobin, while haemoglobin that has no oxygen attached to it is called deoxyhaemoglobin. The magnetic resonance of deoxyhaemoglobin is different from that of oxyhaemoglobin. When blood is diverted to particular areas of the brain, the ratio of oxy- to deoxyhaemoglobin changes, and the sensors can detect this change. Computers produce an image of where the change in ratio occurs.

The MRI scan, tuned to detect the magnetic resonance of hydrogen, generates high-resolution, three-dimensional images of the brain. These images reveal the anatomy and structure of the brain in some detail. This is structural MRI.

One advantage of MRI over PET is that it does not require the patient to be exposed to radioactive substances. The absence of any potentially dangerous exposure means that, unlike the situation with PET, images of the same person performing the same task can be produced repeatedly. However, MRI scanners are noisy, and the patient is confined to a small space, has to use mirrors to see out of the scanner, and has to remain very still. The noise, constriction and motionlessness make for an uncomfortable experience, although advances in technology mean that a wider machine is now available.

 

(1985)

Transcranial magnetic stimulation (TMS) is a recent addition to brain imaging. In TMS, a coil is held near a person's head to generate magnetic field impulses that stimulate the underlying brain cells to make the person perform a specific action. Using this in combination with MRI, the researcher can generate maps of the brain performing very specific functions. Instead of asking a patient to tap his or her finger, the TMS coil can simply "tell" his or her brain to tap the finger. The images received from this technology are slightly different from typical MRI results, and they can be used to map any subject's brain by monitoring up to 120 different stimulations. This technology has been used to map both motor processes and visual processes. In addition to fMRI, the effects of TMS can be measured using EEG or near infrared spectroscopy (NIRS).

Repetitive transcranial magnetic stimulation, known as rTMS, can produce longer-lasting changes. Numerous small-scale pilot studies have shown it could be a treatment tool for various neurological conditions.


 

TMS and rTMS are used in different ways for different purposes.

Single-pulse or paired-pulse TMS: the pulse(s) cause neurons in the neocortex under the site of stimulation to depolarise and discharge an action potential. If used over the primary motor cortex it produces muscle activity, referred to as a motor-evoked potential (MEP), which can be recorded with EMG. If used over the occipital cortex, "phosphenes" (flashes of light) might be detected by the subject. In most other areas of the cortex, the participant does not consciously experience any effect, but his or her behaviour may be slightly altered (e.g. slower reaction time on a cognitive task), or changes in brain activity may be detected using PET or fMRI. Effects resulting from single or paired pulses do not outlast the period of stimulation.


 

Repetitive TMS (rTMS) produces effects which last longer than the period of stimulation. rTMS can increase or decrease the excitability of corticospinal or corticocortical pathways depending on the intensity of stimulation, coil orientation and frequency of stimulation. The mechanism of these effects is not clear, although it is widely believed to reflect changes in synaptic efficacy akin to long-term potentiation (LTP) and long-term depression (LTD).


 

(1990)

Blood-oxygen-level-dependent (BOLD) contrast is the MRI contrast produced by deoxyhaemoglobin in the blood. Almost all current fMRI research uses BOLD as the method for determining where activity occurs in the brain as the result of various experiences.


 

(1990)

Functional magnetic resonance imaging (fMRI) relies on the paramagnetic properties of oxygenated and deoxygenated haemoglobin to see images of changing blood flow in the brain associated with neural activity. This allows images to be generated that reflect which brain structures are activated (and how) during performance of different tasks. Functional imaging enables the processing of information by centres in the brain to be visualised directly. Such processing causes the involved area of the brain to increase metabolism and "light up" on the scan. (The computer effectively subtracts the images produced when the participant is not performing the task from the images produced when they are.) The difference in blood flow is usually represented as a colour scale on the images and indicates where the brain carries out the particular function.
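As a rough sketch of the subtraction mentioned in the parentheses above (real analyses fit statistical models to the haemodynamic response rather than subtracting raw images, and the array layout and names here are our own illustration):

```python
import numpy as np

def task_minus_rest(volumes, task_mask):
    """Subtract the average 'rest' image from the average 'task' image.

    volumes   : 4-D array (time, x, y, z) of BOLD signal intensities
    task_mask : boolean array (time,), True for scans acquired while the
                participant was performing the task
    """
    task_mean = volumes[task_mask].mean(axis=0)
    rest_mean = volumes[~task_mask].mean(axis=0)
    # Positive values mark voxels whose signal was higher during the task,
    # i.e. the areas that "light up" on the colour-scaled map.
    return task_mean - rest_mean
```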

Most fMRI scanners allow subjects to be presented with different visual images, sounds and touch stimuli, and to make different actions such as pressing a button or moving a joystick. Consequently, fMRI can be used to reveal brain structures and processes associated with perception, thought and action. The resolution of fMRI is about 2-3 millimeters at present (2010), limited by the spatial spread of the hemodynamic response to neural activity. It has largely superseded PET for the study of brain activation patterns. PET, however, retains the significant advantage of being able to identify specific brain receptors associated with particular neurotransmitters through its ability to image radiolabelled receptor "ligands" (receptor ligands are any chemicals that stick to receptors).

As well as research on healthy subjects, fMRI is increasingly used for the medical diagnosis of disease. Because fMRI is exquisitely sensitive to blood flow, it can reveal early functional changes in the brain.

 

Diffuse optical imaging (DOI) or diffuse optical tomography (DOT) is an imaging modality which uses near infrared light to generate images of the body. The technique measures the optical absorption of haemoglobin and relies on the absorption spectrum of haemoglobin varying with its oxygenation status.
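For the curious, the link between measured light absorption and haemoglobin can be sketched with the (modified) Beer-Lambert law: measuring at two wavelengths gives two equations, which can be solved for the changes in oxy- and deoxyhaemoglobin. The snippet below is illustrative only; the extinction coefficients must come from published tables, and real instruments also apply a differential pathlength factor.

```python
import numpy as np

def haemoglobin_changes(delta_od, extinction, pathlength_cm):
    """Solve the Beer-Lambert relation for changes in oxy- and deoxyhaemoglobin.

    delta_od      : optical-density changes measured at two wavelengths, shape (2,)
    extinction    : 2x2 matrix of extinction coefficients; rows are the two
                    wavelengths, columns are oxy- and deoxyhaemoglobin
    pathlength_cm : effective optical path length through the tissue
    """
    # delta_od = extinction @ delta_concentration * pathlength, so invert the 2x2 system.
    delta_conc = np.linalg.solve(np.asarray(extinction) * pathlength_cm,
                                 np.asarray(delta_od))
    return delta_conc[0], delta_conc[1]   # (change in oxy-Hb, change in deoxy-Hb)
```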


 

Event-related optical signal (EROS) is a brain-scanning technique which uses infrared light through optical fibers to measure changes in optical properties of active areas of the cerebral cortex. Whereas techniques such as DOT and near infrared spectroscopy (NIRS) measure optical absorption of haemoglobin, and thus are based on blood flow, EROS takes advantage of the scattering properties of the neurons themselves, and thus provides a much more direct measure of cellular activity. EROS can pinpoint activity in the brain within millimeters (spatially) and within milliseconds (temporally). Its biggest downside is the inability to detect activity more than a few centimeters deep. EROS is a new, relatively inexpensive technique that is non-invasive to the test subject.


 

X-ray CT

Scans that use X-rays are generally just referred to as CT [computerised tomography] scans and, although common, they produce images that are two-dimensional and of comparatively low resolution. PET and fMRI scans can locate brain activity, whereas structural MRI and CT scans can only show structure.

As we said above, each molecule has its own magnetic resonance, which means that magnetic resonance can also be used to locate particular molecules within the living brain. Localised magnetic resonance spectroscopy can be used for the detection of specific molecules such as neurotransmitters in a small volume within the brain.

 

T-rays

Sending tight bunches of electrons at nearly the speed of light through a magnetic field causes the electrons to radiate T-rays at a trillion cycles per second—the terahertz frequency that gives T-rays their name and that makes them especially useful for investigating biological molecules.

Invisible T-rays bear comparison with radio waves, microwaves, infrared light and X-rays. But unlike those much-used forms of radiated energy, up until recently T-rays have been little exploited—in part because no one knew how to make them bright enough.

T-rays are electromagnetic radiation of the safe, non-ionizing kind. They can pass through clothing, paper, cardboard, wood, masonry, plastic and ceramics. They can penetrate fog and clouds. Their wavelength—shorter than microwaves, longer than infrared—corresponds revealingly with biomolecular vibrations.
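A quick back-of-the-envelope check of that wavelength claim:

```python
# Wavelength of 1 THz radiation: lambda = c / f
c = 3.0e8                  # speed of light, m/s
f = 1.0e12                 # one terahertz: a trillion cycles per second
print(c / f * 1e3, "mm")   # ~0.3 mm: shorter than typical microwaves, longer than most infrared
```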

Unlike X-rays, non-ionising radiation does not damage tissue or DNA. Some frequencies of terahertz radiation can penetrate several millimetres of tissue with low water content (e.g. fatty tissue) and reflect back. Terahertz radiation can also detect differences in the water content and density of a tissue. Such methods could allow effective detection of some conditions with a safer, less invasive and less painful imaging system. Some frequencies of terahertz radiation can also be used for 3D imaging of teeth and may be more accurate and safer than conventional X-ray imaging in dentistry. Spectroscopy in the terahertz range could provide new information in chemistry and biochemistry.


 

Most recent developments in imaging have come via computer software and improvements in how the data are handled.


 

(mid 1990s)

DTI/DSI (Diffusion tensor imaging/ Diffusion spectrum imaging)

How’s it done?
Diffusion spectrum imaging, developed by neuroscientist Van Wedeen at Massachusetts General Hospital, analyses magnetic resonance imaging (MRI) data in new ways, allowing the mapping of the nerve fibres that make up networks. The graphic explanation below was composed by Olaf Sporns:

 

 


 

(1) High-resolution T1-weighted and diffusion spectrum MRI (DSI) is acquired. DSI is represented with a zoom on the axial slice of the reconstructed diffusion map, showing an orientation distribution function at each position, represented by a deformed sphere whose radius codes for diffusion intensity. Blue codes for the head-feet, red for left-right, and green for anterior-posterior orientations.
(2) White and grey matter segmentation is performed from the T1-weighted image.
(3a) 66 cortical regions with clear anatomical landmarks are created and then (3b) individually subdivided into small regions of interest (ROIs), resulting in 998 ROIs.
(4) Whole-brain tractography is performed, providing an estimate of axonal trajectories across the entire white matter.
(5) The ROIs identified in step (3b) are combined with the result of step (4) in order to compute the connection weight between each pair of ROIs. The result is a weighted network of structural connectivity across the entire brain.
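Step (5) amounts to counting, for every pair of regions, how many reconstructed fibre tracks link them. Here is a minimal Python sketch of that bookkeeping; the input format is our own assumption for illustration, not the authors' actual pipeline, and real pipelines also normalise weights by ROI size and fibre length.

```python
import numpy as np

def connection_matrix(streamline_endpoints, n_rois=998):
    """Build a weighted structural-connectivity network as in step (5).

    streamline_endpoints : iterable of (roi_a, roi_b) index pairs, one per
                           reconstructed fibre track from the tractography step
    n_rois               : number of regions of interest (998 in the pipeline above)
    """
    weights = np.zeros((n_rois, n_rois))
    for a, b in streamline_endpoints:
        if a != b:                 # ignore tracks that begin and end in the same ROI
            weights[a, b] += 1     # one more fibre linking this pair of regions
            weights[b, a] += 1     # structural connections are undirected
    return weights
```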


fMRI and DSI imagery are closely related, and this new technique can measure a significant correlation between brain anatomy and brain dynamics.

 

(2006)

Optogenetics

Thanks to molecular tinkering and new fiber-optic devices that deliver light deep into the brain via an implant, researchers can use optogenetics to study the effect of neural stimulation on different behaviors in live animals. To make neurons sensitive to light, scientists genetically engineer them to carry a protein adapted from green algae. When the modified neuron is exposed to light, via the fiber-optic implant, the protein triggers electrical activity within the cell that spreads to the next neuron in the circuit. The technology allows scientists to control neural activity much more precisely than previous methods, which generally involved delivering electrical current through an electrode.

Optogenetics is allowing scientists to tackle major unanswered questions about the brain, including the role of specific brain regions in the formation of memory, the process of addiction, and the transition from sleep to wakefulness. Scientists are also using optogenetics to study depression, a condition that can also be treated with electrical stimulation.


 

(2009)

Myth-busting: Can brain scans read your mind?

In the last few years, patterns in brain activity have been used to successfully predict what pictures people are looking at, their location in a virtual environment or a decision they are poised to make. The most recent results show that researchers can now recreate moving images that volunteers are viewing - and even make educated guesses at which event they are remembering. Some researchers, and some new businesses, are banking on fMRI to reveal hidden thoughts, such as lies, truths or deep desires.

This is very foolish of them, because scanning is NOT capable of mind reading: the decoding program cannot recognise any images or memories that it hasn't already been trained on. You can't stick somebody in a scanner and know what they're thinking. You can record what they're thinking and then tell whether they are thinking about the same sort of thing when you record them again.
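A toy sketch makes the point: "decoding" is pattern classification, and a classifier can only ever answer with one of the categories it was trained on. Everything below (the random "scans", the labels, the choice of classifier) is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, randomly generated "recordings": one row of voxel values per scan,
# labelled with what the volunteer was viewing when that scan was made.
rng = np.random.default_rng(0)
training_scans = rng.normal(size=(40, 500))          # 40 scans x 500 voxels
training_labels = ["face"] * 20 + ["house"] * 20

decoder = LogisticRegression(max_iter=1000).fit(training_scans, training_labels)

# A new scan can only ever be assigned to a category the decoder has already
# seen -- it cannot reveal a thought it was never trained on.
new_scan = rng.normal(size=(1, 500))
print(decoder.predict(new_scan))                     # prints "face" or "house", nothing else
```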

One of the most dubious ways new imaging developments are being used by society is in the courtroom, through claims of 'mind reading' and the detection of mental states. While the acceptance by courts of traditional GSR lie detector, or polygraph, tests was bad enough (they are simply too unreliable to be trusted), there are now a number of companies claiming to use neuroscience methods to detect lies. Some of these methods involve EEG, which has already been used in two forensic techniques that have appeared in courtrooms: brain fingerprinting and brain electrical oscillations signature (BEOS).

Brain fingerprinting claims to test for 'guilty knowledge,' or memory of a kind that only a guilty person could have, which is completely unrealistic with current tech (2010). Other forms of hypothetical guilt detection, using functional magnetic resonance imaging (fMRI), are based on the assumption that lying and truth-telling are associated with distinctive activity in different areas of the brain. This is also extremely unreliable. The premature use of insufficiently accurate technology in justice issues is a dangerous development and it could lead to serious problems.

 

 



Last Updated on Saturday, 10 March 2012 11:23