
Reconstructing visual experiences from brain activity evoked by natural movies.

Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu & Jack L. Gallant. Current Biology, published online September 22, 2011.

Quantitative modeling of human brain activity can provide crucial insights about cortical representations and can form the basis for brain decoding devices. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow, so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.

Simple example of reconstruction

The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:

[1] Record brain activity while the subject watches several hours of movie trailers.

[2] Build dictionaries (regression models; see below) to translate between the shapes, edges and motion in the movies and measured brain activity.
A separate dictionary is constructed for each of several thousand points in the brain at which brain activity was measured. (For experts: our success in building a movie-to-brain-activity encoding model that can predict brain activity for arbitrary novel movie inputs was one of the keys of this study.)

[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.

[4] Build a random library of ~18,000,000 seconds of video downloaded at random from YouTube (with no overlap with the movies subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average those clips together. This is the reconstruction.

Reconstruction for different subjects
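For readers who want a concrete picture, the library-search reconstruction in steps [2]-[4] can be sketched in a few lines of Python. Everything here is illustrative: the feature matrices, weights, and clip frames are stand-ins, not the study's actual data or code.

```python
import numpy as np

def reconstruct(observed, clip_features, clip_frames, weights, k=100):
    """Average the k library clips whose predicted brain activity best
    matches the observed activity (illustrative sketch of steps [2]-[4])."""
    # "Dictionary" prediction: linear map from clip features to voxel activity
    predicted = clip_features @ weights                # (n_clips, n_voxels)
    # Score each clip by correlation of its prediction with the observed pattern
    p = predicted - predicted.mean(axis=1, keepdims=True)
    o = observed - observed.mean()
    scores = (p @ o) / (np.linalg.norm(p, axis=1) * np.linalg.norm(o) + 1e-12)
    best = np.argsort(scores)[-k:]                     # top-k matching clips
    return clip_frames[best].mean(axis=0)              # averaged clips
```

With k = 100 this corresponds to the averaging step described above; with k = 1 it simply returns the single best-matching library clip.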

This video is organized as follows: the movie that each subject viewed while in the magnet is shown at upper left. Reconstructions for three subjects are shown in the three rows at bottom. All these reconstructions were obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. The reconstruction at far left is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging over the 100 most likely movies in the reconstruction library. These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of the brain activity data recorded from each subject.

Frequently Asked Questions About This Work
Could you give a simple outline of the experiment?

The goal of the experiment was to design a process for decoding dynamic natural visual experiences from human visual cortex. More specifically, we sought to use brain activity measurements to reconstruct natural movies seen by an observer. First, we used functional magnetic resonance imaging (fMRI) to measure brain activity in visual cortex as a person looked at several hours of movies. We then used these data to develop computational models that could predict the pattern of brain activity that would be elicited by any arbitrary movies (i.e., movies that were not in the initial set used to build the model). Next, we used fMRI to measure brain activity elicited by a second set of movies that were completely distinct from the first set. Finally, we used the computational models to process the elicited brain activity, in order to reconstruct the movies in the second set. This is the first demonstration that dynamic natural visual experiences can be recovered from very slow brain activity recorded by fMRI.

Can you give an intuitive explanation of movie reconstruction?

As you move through the world or watch a movie, a dynamic, ever-changing pattern of activity is evoked in the brain. The goal of movie reconstruction is to use the evoked activity to recreate the movie you observed. To do this, we create encoding models that describe how movies are transformed into brain activity, and then we use those models to decode brain activity and reconstruct the stimulus.

Can you explain the encoding model and how it was fit to the data?

To understand our encoding model, it is most useful to think of perception as a process of filtering the visual input in order to extract useful information. The human visual cortex consists of billions of neurons. Each neuron can be viewed as a filter that takes a visual stimulus as input and produces a spiking response as output. In early visual cortex these neural filters are selective for simple features such as spatial position, motion direction and speed. Our motion-energy encoding model describes this filtering process.

Currently the best method for measuring human brain activity is fMRI. However, fMRI does not measure neural activity directly, but rather measures the hemodynamic changes (i.e., changes in blood flow, blood volume and blood oxygenation) that are caused by neural activity. These hemodynamic changes take place over seconds, so they are much slower than the changes that can occur in natural movies (or in the individual neurons that filter those movies). Thus, it had previously been thought impossible to decode dynamic information from brain activity recorded by fMRI.

To overcome this fundamental limitation we use a two-stage encoding model. The first stage consists of a large collection of motion-energy filters that span a range of positions, motion directions and speeds, like the underlying neurons. This stage models the fast responses in the early visual system. The output of the first stage is fed into a second stage that describes how neural activity affects hemodynamic activity in turn. This two-stage processing allows us to model the relationship between the fine temporal information in the movies and the slow brain activity signals measured using fMRI.

Functional MRI records brain activity from small volumes of brain tissue called voxels (here each voxel was 2.0 x 2.0 x 2.5 mm). Each voxel represents the pooled activity of hundreds of thousands of neurons.
Therefore, we do not model each voxel as a single motion-energy filter, but rather as a bank of thousands of such filters. In practice, fitting the encoding model to each voxel is a straightforward regression problem. First, each movie is processed by a bank of nonlinear motion-energy filters. Next, a set of weights is found that optimally maps the filtered movie (now represented as a vector of about 6,000 filter outputs) onto the measured brain activity. (Linear summation is assumed in order to simplify fitting.)
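The two-stage idea can be sketched roughly as follows, using a made-up feature matrix and a toy hemodynamic kernel rather than the actual motion-energy filter bank: stage one yields fast feature time courses, stage two blurs them with a slow hemodynamic response, and ridge regression then finds the per-voxel weights.

```python
import numpy as np

def convolve_hrf(X, hrf):
    """Stage 2: blur fast feature time courses X (shape T x F) with a slow
    hemodynamic response kernel, truncated back to the original length T."""
    T = X.shape[0]
    return np.column_stack(
        [np.convolve(X[:, j], hrf)[:T] for j in range(X.shape[1])]
    )

def fit_voxel_weights(X, y, alpha=1.0):
    """Ridge regression: weights mapping HRF-convolved features to one
    voxel's BOLD time course (linear summation assumed, as in the text)."""
    F = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(F), X.T @ y)
```

In the study itself the feature vector has about 6,000 nonlinear motion-energy filter outputs per voxel; this toy keeps four features just to show the shape of the regression problem.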

How accurate is the decoder?

A good decoder should produce a reconstruction that a neutral observer judges to be visually similar to the viewed movie. However, it is difficult to quantify human judgments of visual similarity. In this paper we use similarity in the motion-energy domain: that is, we quantify how much of the spatially localized motion information in the viewed movie was reconstructed. The accuracy of our reconstructions is far above chance.

Other studies have attempted reconstruction before. How is your study different?

Previous studies showed that it is possible to reconstruct static visual patterns (Thirion et al., 2006, Neuroimage; Miyawaki et al., 2008, Neuron), static natural images (Naselaris et al., 2009, Neuron) or handwritten digits (van Gerven et al., 2010, Neural Computation). However, no previous study has produced reconstructions of dynamic natural movies. This is a critical step toward obtaining reconstructions of internal states such as imagery, dreams and so on.

Why is this finding important?

From a basic science perspective, our paper provides the first quantitative description of dynamic human brain activity during conditions simulating natural vision. This information will be important to vision scientists and other neuroscientists. Our study also represents another important step in the development of brain-reading technologies that could someday be useful to society. Previous brain-reading approaches could only decode static information. But most of our visual experience is dynamic, and these dynamics are often the most compelling aspect of visual experience. Our results will be crucial for developing brain-reading technologies that can decode dynamic experiences.

How many subjects did you run? Is there any chance that they could have cheated?

We ran three subjects for the experiments in this paper, all co-authors. There are several technical considerations that made it advantageous to use authors as subjects.
It takes several hours to acquire sufficient data to build an accurate motion-energy encoding model for each subject, and naive subjects find it difficult to stay still and alert for this long. Authors are motivated to be good subjects, so their data are of high quality. These high-quality data enabled us to build detailed and accurate models for each individual subject. There is no reason to think that the use of authors as subjects weakens the validity of the study. The experiment focuses solely on the early part of the visual system, and this part of the brain is not heavily modulated by intention or prior knowledge. The movies used to develop encoding models for each subject and those used for decoding were completely separate, and there is no plausible way that a subject could have changed their own brain activity in order to improve decoding. Many fMRI studies use much larger groups of subjects, but they collect much less data on each subject. Such studies tend to average over much of the individual variability in the data, and the results provide a poor description of brain activity in any individual subject.

What are the limits on brain decoding?

Decoding performance depends on the quality of brain activity measurements. In this study we used functional MRI (fMRI) to measure brain activity. (Note that fMRI does not actually measure the activity of neurons. Instead, it measures blood flow consequent to neural activity. However, many studies have shown that the blood flow signals measured using fMRI are generally correlated with neural activity.) fMRI has relatively modest spatial and temporal resolution, so much of the information contained in the underlying neural activity is lost when using this technique. fMRI measurements are also quite variable from trial to trial. Both of these factors limit the amount of information that can be decoded from fMRI measurements.
Decoding also depends critically on our understanding of how the brain represents information, because this will determine the quality of the computational model. If the encoding model is poor (i.e., if it does a poor job of prediction) then the decoder will be inaccurate. While our computational models of some cortical visual areas perform well, they do not perform well when used to decode activity in other parts of the brain. A better understanding of the processing that occurs in parts of the brain beyond visual cortex (e.g. parietal cortex, frontal cortex) will be required before it will be possible to decode other aspects of human experience.
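The accuracy measure mentioned earlier (similarity in the motion-energy domain, compared against chance) can be illustrated like this; the vectors and library here are placeholders, not the paper's actual metric implementation.

```python
import numpy as np

def motion_energy_similarity(recon, viewed):
    """Pearson correlation between two motion-energy vectors."""
    r = recon - recon.mean()
    v = viewed - viewed.mean()
    return float(r @ v / (np.linalg.norm(r) * np.linalg.norm(v)))

def chance_similarities(recon, library, n=1000, seed=0):
    """Null distribution: similarity of the reconstruction to randomly
    drawn clips, giving the chance level to compare against."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(library), size=n)
    return np.array([motion_energy_similarity(recon, library[i]) for i in idx])
```

A reconstruction counts as "far above chance" when its similarity to the viewed clip sits well outside this null distribution.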

What are the future applications of this technology?

This study was not motivated by a specific application, but was aimed at developing a computational model of brain activity evoked by dynamic natural movies. That said, there are many potential applications of devices that can decode brain activity. In addition to their value as a basic research tool, brain-reading devices could be used to aid in the diagnosis of diseases (e.g., stroke, dementia); to assess the effects of therapeutic interventions (drug therapy, stem cell therapy); or as the computational heart of a neural prosthesis. They could also be used to build a brain-machine interface.

Could this be used to build a brain-machine interface (BMI)?

Decoding visual content is conceptually related to the work on neural-motor prostheses being undertaken in many laboratories. The main goal in the prosthetics work is to build a decoder that can be used to drive a prosthetic arm or other device from brain activity. Of course there are some significant differences between sensory and motor systems that affect the way a BMI system would be implemented in each. But ultimately, the statistical frameworks used for decoding in the sensory and motor domains are very similar. This suggests that a visual BMI might be feasible.

At some later date when the technology is developed further, will it be possible to decode dreams, memory, and visual imagery?

Neuroscientists generally assume that all mental processes have a concrete neurobiological basis. Under this assumption, as long as we have good measurements of brain activity and good computational models of the brain, it should be possible in principle to decode the visual content of mental processes like dreams, memory, and imagery. The computational encoding models in our study provide a functional account of brain activity evoked by natural movies.
It is currently unknown whether processes like dreaming and imagination are realized in the brain in a way that is functionally similar to perception. If they are, then it should be possible to use the techniques developed in this paper to decode brain activity during dreaming or imagination.

At some later date when the technology is developed further, will it be possible to use this technology in detective work, court cases, trials, etc.?

The potential use of this technology in the legal system is questionable. Many psychology studies have now demonstrated that eyewitness testimony is notoriously unreliable. Witnesses often have poor memory, but are usually unaware of this. Memory tends to be biased by intervening events, inadvertent coaching, and rehearsal (prior recall). Eyewitnesses often confabulate stories to make logical sense of events that they cannot recall well. These errors are thought to stem from several factors: poor initial storage of information in memory; changes to stored memories over time; and faulty recall. Any brain-reading device that aims to decode stored memories will inevitably be limited not only by the technology itself, but also by the quality of the stored information. After all, an accurate read-out of a faulty memory only provides misleading information. Therefore, any future application of this technology in the legal system will have to be approached with extreme caution.

Will we be able to use this technology to insert images (or movies) directly into the brain?

Not in the foreseeable future. There is no known technology that could remotely send signals to the brain in a way that would be organized enough to elicit a meaningful visual image or thought.

Does this work fit into a larger program of research?

One of the central goals of our research program is to build computational models of the visual system that accurately predict brain activity measured during natural vision.
Predictive models are the gold standard of computational neuroscience and are critical for the long-term advancement of brain science and medicine. To build a computational model of some part of the visual system, we treat it as a "black box" that takes visual stimuli as input and generates brain activity as output. A model of the black box can be estimated using statistical tools drawn from classical and Bayesian statistics, and from machine learning. Note that this reverse-engineering approach is agnostic about the specific way that brain activity is measured.

One good way to evaluate these encoding models is to construct a corresponding decoding model, and then assess its performance in a specific task such as movie reconstruction.

Why is it important to construct computational models of the brain?
The brain is an extremely complex organ and many convergent approaches are required to obtain a full understanding of its structure and function. One way to think about the problem is to consider three different general goals of research in systems/computational neuroscience. (1) The first goal is to understand how the brain is divided into functionally distinct modules (e.g., for vision, memory, etc.). (2) The second goal, contingent on the first, is to determine the function of each module. One classical approach for investigating the function of a brain circuit is to characterize neural responses at a quantitative computational level that is abstracted away from many of the specific anatomical and biophysical details of the system. This helps make tractable a problem that would otherwise seem overwhelmingly complex. (3) The third goal, contingent on the first two, is to understand how these specific computations are implemented in neural circuitry. A byproduct of this model-based approach is that it has many specific applications, as described above.

Can you briefly explain the function of the parts of the brain examined here?

The human visual system consists of several dozen distinct cortical visual areas and sub-cortical nuclei, arranged in a network that is both hierarchical and parallel. Visual information comes into the eye, where it is transduced into nerve impulses. These are sent on to the lateral geniculate nucleus and then to primary visual cortex (area V1). Area V1 is the largest single processing module in the human brain. Its function is to represent visual information in a very general form by decomposing visual stimuli into spatially localized elements. Signals leaving V1 are distributed to other visual areas, such as V2 and V3. Although the function of these higher visual areas is not fully understood, it is believed that they extract relatively more complicated information about a scene.
For example, area V2 is thought to represent moderately complex features such as angles and curvature, while high-level areas are thought to represent very complex patterns such as faces. The encoding model used in our experiment was designed to describe the function of early visual areas such as V1 and V2, but was not meant to describe higher visual areas. As one might expect, the model does a good job of decoding information in early visual areas but does not perform as well in higher areas.

Are there any ethical concerns with this type of research?

The current technology for decoding brain activity is relatively primitive. The computational models are immature, and in order to construct a model of someone's visual system they must spend many hours in a large, stationary magnetic resonance scanner. For this reason it is unlikely that this technology could be used in practical applications any time soon. That said, both the technology for measuring brain activity and the computational models are improving continuously. It is possible that decoding brain activity could have serious ethical and privacy implications downstream in, say, a 30-year time frame. As an analogy, consider the current debates regarding the availability of genetic information. Genetic sequencing is becoming cheaper by the year, and it will soon be possible for everyone to have their own genome sequenced. This raises many issues regarding privacy and the accessibility of individual genetic information. The authors believe strongly that no one should be subjected to any form of brain-reading process involuntarily, covertly, or without complete informed consent.

What are gamma brainwaves?

Gamma brainwaves are considered the brain's optimal frequency of functioning. They are commonly associated with increased levels of compassion, feelings of happiness, and optimal brain functioning. Gamma brainwaves are associated with a conscious awareness of reality and increased mental abilities. They range in frequency from 38 Hz to 70 Hz and have a tiny (virtually unnoticeable) amplitude. Gamma brainwaves can be found in virtually every part of the brain. They serve as a binding mechanism between all parts of the brain and help to improve memory and perception.

Benefits of increasing gamma brainwaves:

Boosted memory

High amounts of gamma brainwaves have been associated with a boosted memory and the ability to recall past events. The 40 Hz gamma frequency has been associated with a well-regulated memory. If you are currently struggling to maintain a great, healthy memory, consider increasing your 40 Hz gamma brainwave.

Enhanced perception of reality

Gamma brainwaves can provide you with an enhanced overall perception of reality and understanding of consciousness. Because gamma brainwaves can be found in virtually every part of the brain, they allow parts of the brain to communicate. Through that communication, your reality and perception are formed.

Binding of senses

The gamma brainwave is what allows us to experience smell, touch, vision, taste, and hearing altogether. It allows our brain to process multiple sensations at the same time and allows us to identify environmental forms of stimulation. It also improves our overall perception of our senses by enhancing our levels of focus.

Increased compassion

Advanced meditation practices and yogic traditions have associated the gamma brainwave frequency range with a pure state of compassion. Richard Davidson hooked long-time meditators up to an E.E.G. at the University of Wisconsin-Madison and found that the more meditation experience a person had, the higher the amount of gamma brainwave displayed. Since most people aren't able to cultivate a pure state of compassion like many monks, they may never understand or feel the wonderment of the gamma brainwave range.

High-level information processing

Gamma brainwaves are associated with high-level information processing in the brain. Basically, the brain is able to operate more efficiently at a higher level. Thoughts are easily processed and the brain is able to easily absorb and understand new information and changes in one's environment.

Natural antidepressant

The gamma brainwave is a known natural antidepressant.
Not only does it increase our level of compassion for others, it boosts our overall levels of happiness. Many people claim that listening to the gamma brainwave while meditating has been extremely effective at eliminating their depression. The gamma brainwave decreases during stress, anxiety, and cases of depression; in people with depression, the amount of gamma brainwave tends to be much lower than average. It is no wonder, then, that increasing your gamma brainwaves may make you feel much less depressed.

Advanced learning ability

Since gamma has been associated with a higher level of information processing, quicker thinking, and an enhanced perception of reality, people with high amounts of gamma brainwaves tend to have an advanced learning ability. People with learning disabilities, ADD, and those under a lot of stress tend to have a significantly smaller amount of the gamma brainwave than others.

Intelligence (I.Q.) increase

The gamma brainwave has been associated with higher-than-average levels of intelligence. People with lower I.Q.s and learning disabilities tend to have very low amounts of gamma brainwave compared to smarter individuals. Increasing your gamma brainwave, especially at 40 Hz, will probably correlate with at least a slight intelligence increase.

Positive thoughts

Are you a person who always thinks positively and has compassion for others? If you already think positively and are relaxed, you probably have high amounts of the gamma brainwave. In people with depression, relatively little gamma brainwave activity can be observed on an E.E.G. If you have depression or are a chronic negative thinker, you may want to seriously consider naturally increasing your gamma brainwaves.

Higher energy levels

Higher brainwave frequencies in the beta and gamma ranges correlate with increased physical and mental energy. Since the gamma range is among the highest of known brainwave frequencies, it can give your energy level a jolt upwards. If you currently have low amounts of energy, consider increasing your brain's gamma brainwaves.

High level of focus

The mind is extremely focused on just one thought while in the gamma brainwave range. It is important to cultivate a high level of focus in order to efficiently complete tasks and succeed in the world. It is very difficult to be successful when you have a learning disability or are lacking in focus.
Sustaining a high level of focus can be done easily by increasing the amount of 40 Hz gamma activity in the brain.

Improved perception / consciousness

Gamma brainwaves have been linked to improved perception of reality and the ability to be aware of one's consciousness. Gamma brainwaves are very powerful, and increasing them may feel like quite an awakening if you don't have much natural gamma activity. Advanced meditators have much more gamma activity than the average person, which is why it is easy for them to control and understand their state of consciousness.

Who has high amounts of gamma brainwaves?

Advanced meditators

Advanced meditators tend to have a large amount of gamma brainwave activity compared to non-meditators. The amount of gamma brainwave and its amplitude increase as one's ability to go deeper into meditation increases. Though most meditation practices increase the amount of slow brainwaves in the alpha and theta range, the gamma brainwave frequency increases as well. The gamma brainwave is what allows meditators to distinguish the alpha, and possibly the theta, brainwave ranges. As you gain more meditation experience, you'll learn to naturally boost your gamma brainwave activity. Research has shown that the more experience you have with meditation, the more gamma brainwave activity you'll display.

Peak performers

Peak performers tend to have large amounts of gamma brainwave activity compared to others. Though alpha bursts in the left hemisphere have been scientifically linked to peak performance, gamma brainwave activity is suggested to be essential to performing at an optimal level. If you are interested in manipulating your brainwave patterns to help create a state of peak performance, you may want to try both alpha and gamma and judge which one works better for you. I've heard of several brainwave training regimens that claim it is best to use 10 Hz alpha for visualization several hours before your sporting event, then the gamma brainwave around 30 minutes to 1 hour before your event. The combination of alpha, followed by gamma, is supposed to create a state of peak mental preparation and performance.

Just like any of the other brainwave patterns, too much of a dominant rhythm can cause problems. It would not be recommended to increase a brainwave that you already have high levels of. Though it is extremely rare, it is possible that the gamma brainwave could cause a couple of problems.

Problems associated with too many gamma brainwaves:

Some anxiety

Though gamma brainwaves are usually not correlated with stress and anxiety, they can be. When a person displays mostly high amounts of beta brainwaves in combination with gamma on an E.E.G., the individual probably has very high levels of anxiety. Though gamma brainwaves usually decrease when we are under stress, the dopamine released from gamma brainwaves can actually cause us to feel overanxious, nervous, or tense. It is best not to increase both gamma and beta brainwaves at the same time. Depending on your current brainwave state, it is important to recognize that though you are usually safe increasing gamma, overdoing the training time or frequency of training may make you feel unpleasantly anxious.

Clear, conscious perception of reality

Some people are not prepared for the mental awakening that is associated with gamma brainwaves. If you are currently living a fairly unfocused life and happen to begin entraining the gamma brainwave, it may feel like a huge jolt to your consciousness. If I were extremely unfocused, I'd definitely work on entraining the gamma brainwave, but I'd do it slowly and in moderation. Too much gamma entrainment will actually give you a headache! It is important not to become disturbed by your brain's initial reactions to an increased gamma brainwave and perception of reality.

Healthy ways to increase gamma brainwaves:

Brainwave entrainment

As I mention a lot, brainwave entrainment is great for fine-tuning your state of consciousness and awareness. If you want to easily and naturally experience gamma brainwaves, I highly recommend trying any of the programs in my recommended products section. Brainwave entrainment is an easy process that involves simply listening to a tone (stimulus); your brainwaves will automatically, naturally shift in order to match the desired frequency associated with the acoustic tone. If you have Neuro-Programmer 2 or Mind Stereo, I recommend creating a customized gamma brainwave session at 40 Hz. Why 40 Hz? 40 Hz is the frequency of choice and has been linked to the most powerful, positive effects currently associated with the gamma brainwave.
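The entrainment stimulus described above is just a tone pulsed at the target frequency. As a generic illustration (this is not how Neuro-Programmer 2 or Mind Stereo generate their sessions, and the carrier frequency and duration are arbitrary choices), a 40 Hz isochronic-style tone can be synthesized directly:

```python
import numpy as np

def isochronic_tone(carrier_hz=200.0, pulse_hz=40.0, seconds=5.0, rate=44100):
    """Sine carrier gated on and off at pulse_hz (here 40 Hz): the simple
    'tone pulsed at the target frequency' stimulus described in the text."""
    t = np.arange(int(seconds * rate)) / rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    gate = (np.sin(2 * np.pi * pulse_hz * t) > 0).astype(float)  # on/off at 40 Hz
    return (carrier * gate).astype(np.float32)
```

The resulting array can be written to a WAV file with the standard-library wave module and played back; whether such a tone actually entrains gamma activity is, as noted later in this post, not well documented.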

Getting a good night's sleep

Gamma brainwave activity is present in Rapid Eye Movement (R.E.M.) sleep and is sometimes associated with dreaming. Getting a good night's sleep is important for staying healthy and keeping a healthy, powerful brain. Gamma brainwaves also increase the moment we awaken. Though we are in the theta brainwave for most of R.E.M. sleep, the gamma brainwave is present along with the theta. Most non-dream, deep sleep is linked to an increase in delta brainwave activity, whereas dream sleep is mostly linked to gamma and theta brainwave activity.

Meditation

The goal of most types of meditation is to lower the brainwaves into the alpha-theta range. That said, as you learn to become more aware and increase awareness of your brainwave state, your gamma brainwave activity will naturally increase. A very safe, healthy way to attempt to increase your gamma brainwaves is to make meditation a daily habit or start up a meditation routine. If you are already meditating, great: you'll naturally increase your awareness, and as your awareness increases, your gamma brainwave will increase.

Hypnosis / Self-hypnosis

The goal of all hypnosis and self-hypnosis programs is to target the lower brainwave ranges (i.e., alpha and theta) and implant new beliefs. Though you are slowing your brainwaves, your concentration levels are skyrocketing as well. Having only large amounts of alpha and theta without gamma would make self-hypnosis very difficult and ineffective. The more often you practice self-hypnosis, the more your gamma brainwave amplitude will increase.

Yoga

Like meditation, yoga is yet another activity that promotes relaxation and well-being by shifting your brainwaves and increasing your perception of reality. Brainwave recordings of yogis have shown that they are able to increase their gamma brainwaves to higher-than-average amounts.
Though there are many different types of yoga, if they are practiced correctly, they can be utilized to increase awareness and gain valuable insight from within.

Unhealthy ways to increase gamma brainwaves?


There are no known unhealthy activities that increase gamma brainwaves. Virtually all activities that are detrimental to mental health decrease the amount of gamma brainwave activity in the brain. Things like general anesthesia, stress, and killing brain cells will decrease your brain's natural production of gamma brainwave activity. As gamma brainwave activity decreases, susceptibility to depression, stress, and unfocused or impulsive thinking may overtake the brain.

Final evaluation of gamma brainwaves:

I personally think that gamma brainwaves are very invigorating, reality enhancing, and great for everyone to experience. The advanced focus, learning ability, and perception of consciousness are something everyone should experience. The gamma brainwave is a natural antidepressant, and experiencing the power of 40 Hz feels awesome. However, you should be the judge as to whether or not gamma is the brainwave you want to experiment with. If you are already experiencing many of the listed benefits, your gamma brainwave could be within its healthiest range. Are you already a very smart, compassionate person? If so, chances are high that your brain is naturally producing a fair amount of gamma brainwave activity. However, most people do not experience large amounts of the gamma brainwave unless they are in a state of compassion meditation. If you think you are one of the rare people who actually have too much conscious perception of reality and are slightly anxious, you may not want to increase your gamma brainwaves.

I highly recommend entraining the 40 Hz gamma brainwave to see how you react. Most people don't produce large amounts of gamma naturally, and most people can benefit. There is not much documented evidence that the gamma brainwave can even be entrained, but most people that I've talked to claim that it does have an effect. I have also given the 40 Hz gamma brainwave a shot and have found it extremely effective for increasing my focus. If you purchase Neuro-Programmer 2 Professional or any of my recommended brainwave products, you can create a customized 40 Hz gamma brainwave session or use a specialized one that's already built into the library. The built-in gamma brainwave sessions are supposed to be great; I usually stick to creating my own gamma sessions with custom frequencies.

If you are confused about your brainwave pattern, consider experimenting with Neuro-Programmer 2 Professional and seeing how your brain reacts and how your reality is shifted. If you have ever given gamma brainwaves a shot, I'd really like to hear about your experience in the comments section. If you are interested in experiencing some gamma brainwaves or have any questions for me, feel free to send me a message through my contact form. I appreciate when you buy products through my referral ads to help me pay for blog hosting services and the promotion of this blog! Increasing your gamma brainwave pattern is definitely an experience that has the potential to boost your brain power and take your brain to a higher level of functioning.

RESEARCH

The project involves basic research needed to make possible a brain-computer interface for decoding thought and communicating it to an intended target. Applications are to situations in which it is either impossible or inappropriate to communicate using visual means or by audible speech; the long-term aim is to provide a significant advance in Army communication capabilities in such situations. Non-invasive brain-imaging technologies like electroencephalography (EEG) offer a potential way for dispersed team members to communicate their thoughts. A Soldier thinks a message to be transmitted. A system for automatic imagined speech recognition decodes EEG recordings of brain activity during the thought message. A second system infers simultaneously the intended target of the communication from EEG signals. Message and target information are then combined to communicate the message as intended.

Contents: Overview | EEG and brain-computer interfaces | Imagined speech production | Intended direction | Potential applications | References | Publications

Overview
In 1967, Dewan published a paper in Nature that first described a method for communicating linguistic information by brain waves measured using EEG. He trained himself and several others to modulate their brains' alpha rhythms: to turn these rhythms on and off at will. Alpha rhythms reflect brain neuron activity, at or about a frequency of 10 Hz, concerning not only whether the eyes are open but also one's state of attention. Mental activity and attention abolish these rhythms, which are normally present in a state of mental relaxation. Dewan was able to signal letters of the alphabet using Morse code by voluntarily turning these rhythms on and off, with eyes closed. Signalling such letters, one by one, provides the words and phrases that the communicator has in mind.

In 1988, Farwell and Donchin described a second method for transmitting linguistic information. This method is based on the P300 response, again measured using EEG. The P300 is evoked when a person is presented a stimulus that matches what it is they are looking for: a target. Farwell and Donchin displayed to the thinker the letters of the alphabet, one by one, eventually displaying the letter that he or she had in mind. The P300 potential would be evoked for that target letter, so signalling the thinker's desire to communicate it. Again, thinkers can communicate words by signalling the words' letters one by one.

Can one use brain waves that are more directly linked to speech production to communicate linguistic information? Speech is a natural method for communicating linguistic information. Were one able to use EEG to measure directly the activity of brain speech networks, one could potentially develop an easier and faster method for communicating linguistic information using EEG. Our work on covert speech production pursues this idea. Covert speech is the technical term used to refer to the words one hears in one's head while thinking: imagined speech. Can we use EEG to measure brain activity during covert speech production in a way that lets one communicate linguistic information in a natural and rapid way?

The work aims also to determine, from brain waves, where the linguistic information should be sent: sent in a particular direction, sent to a particular person, etc. The question is not so much how the message should be sent (e.g., wireless text messaging) but where or to whom. Work on the relationship between alpha rhythms and attention has, since Dewan's time, revealed that the pattern of alpha rhythm activity in the two hemispheres of the brain provides information on where a person is focusing attention. For example, paying attention to an area in the left half of one's visual field causes the alpha rhythm activity in the right hemisphere of the brain to desynchronize (and so diminish in intensity), and vice versa. These shifts in brain activity are thought to be helpful in directing more sensory and cognitive resources to the area being attended (e.g., Corbetta and Shulman, 2002). EEG can be used to measure patterns of alpha rhythms (Worden et al., 2000; Sauseng et al., 2005), to measure electric potentials that are evoked in response to a shift in attention (e.g., Harter et al., 1989; Corbetta et al., 1993), and to measure the change in amplitude of steady-state responses that are evoked by a shift in attention among frequency-tagged visual stimuli (e.g., Srinivasan et al., 2006). We are studying alpha rhythms, evoked responses and steady-state evoked potentials measured using EEG to help develop a brain-computer interface that helps the thinker communicate where or to whom a message should be sent.
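Dewan's alpha-rhythm scheme described above can be sketched in code. The sketch below is illustrative and not from the project: it assumes alpha bursts have already been detected and thresholded into durations (short burst = dot, long burst = dash, a pause between letters marked here as None), and it decodes them with a small Morse table. The 0.5 s dot/dash boundary is an assumed parameter.

```python
# Hypothetical sketch of Dewan-style alpha-rhythm Morse signalling.
# Input: durations (seconds) of voluntary alpha "on" bursts, with
# None marking a pause between letters.

MORSE = {  # small subset of the Morse table, enough for a demo
    ".-": "A", "-...": "B", "-.-.": "C", ".": "E", "....": "H",
    "..": "I", "-.": "N", "---": "O", "...": "S", "-": "T",
}

DOT_MAX = 0.5  # bursts up to 0.5 s count as dots (assumed threshold)

def decode_bursts(bursts):
    """Translate alpha-burst durations into text via Morse code."""
    letters, symbol = [], ""
    for b in bursts + [None]:          # trailing None flushes the last letter
        if b is None:
            if symbol:
                letters.append(MORSE.get(symbol, "?"))
                symbol = ""
        else:
            symbol += "." if b <= DOT_MAX else "-"
    return "".join(letters)

# "SOS": three short bursts, three long bursts, three short bursts.
message = decode_bursts([0.3, 0.3, 0.3, None, 1.2, 1.2, 1.2, None, 0.3, 0.3, 0.3])
```

Signalling this way is slow (each letter needs several deliberate bursts), which is exactly the limitation that motivates decoding speech networks directly.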
Finally, we aim to learn more about activity in brain networks when two or more tasks are carried out simultaneously. Many studies in cognitive neuroscience involve brain-imaging measurements taken during the performance of a single task (e.g., visual detection, language processing, decision-making). Covert speech production and direction intention are likely to use differing brain resources. Can these differences be used by a brain-computer interface to infer both a communicator's message and the recipient?

EEG and brain-computer interfaces


A tremendous amount of scientific and engineering progress has been made over the past several decades in developing brain-computer interfaces based on EEG measurements of brain network activity. One indicator of this progress is that there are now (at least) three companies developing EEG-based technologies for use with computer and console games: Emotiv Systems, OCZ Technology, and Neurosky. The idea is that an EEG headset, worn by the player, provides signals concerning what action the player wants to take in the game, whether the player is paying attention to the game, etc. Game software which is responsive to information provided by the EEG device guides and modifies gameplay.

Research on brain-computer interfaces has historically been motivated more by biomedical applications. For example, people who have suffered strokes or injuries to their brain, as well as those suffering from certain diseases like Lou Gehrig's disease (ALS), may have impaired movement for part or all of their bodies while preserving a good deal of normal brain function. Can a person signal how they would like to move their arm or move a cursor on a computer screen under such circumstances? Much work has led to success here. For example, researchers at Pitt and CMU showed recently that a monkey can control movement of a prosthetic arm using brain waves measured using implanted electrodes. Electrocorticographic (ECoG) measurements in humans show parallel promise for motor control through a brain-computer interface.

Our work uses the non-invasive technologies EEG, MEG (magnetoencephalography: measurement of magnetic field fluctuations at the scalp caused by brain activity) and fMRI (functional magnetic resonance imaging). EEG measures electric field fluctuations at the surface of the scalp caused by brain activity. While it has the advantages of portability, relatively low cost and high temporal resolution (the ability to track rapid events in the brain), it has several disadvantages (Nunez and Srinivasan, 2006). First, its spatial resolution is limited to about two centimeters; electric potential changes in the brain spread diffusely as they move towards the scalp surface where measurements are made. Second, EEG is also sensitive to electric field potential changes caused by muscle. Movements of the eyes, movements of muscles beneath the scalp, etc., create large electric potential changes that can swamp signals from neurons in the brain. MEG has spatial resolution similar to EEG. MEG has the further disadvantage that it relies on very expensive equipment that can only be used in a room which is completely shielded from external sources of electromagnetic radiation. An advantage of MEG over EEG is that it is better able to measure activity in brain cortical areas which are oriented perpendicularly to the scalp's surface: brain cortex that lies in the folds. fMRI, finally, has very good spatial resolution (on the order of 1 millimeter) but poor temporal resolution. The blood oxygenation level signal relied on in fMRI measurement is sluggish. fMRI is thus useful for localizing brain activity in space but is limited in determining when that activity occurs. While our focus is on an EEG-based brain-computer interface, we use MEG and fMRI to acquire further information about underlying brain activity which is not available in EEG signals.

Imagined speech production


One of the simplest possible ways to test whether EEG provides information concerning thought expressed through imagined (covert) speech is as follows. A person who wears an EEG headset is shown either the letter "y" or the letter "n" very briefly. A second or two later, the person thinks to him or herself the word "yes" or the word "no", depending on whether the displayed letter was "y" or "n", respectively. Do this many times while recording the EEG signals. One way to analyze the EEG data from such an experiment is to attempt to classify the data. The aim is to use the EEG signal information alone to distinguish those times when the person was thinking "yes" from those times when the person was thinking "no". If the EEG signals provide enough information to classify accurately the "yes" and the "no" thoughts, then one has made good progress. However, one cannot rest satisfied with such a result. For example, it may be the case that one can tell apart "yes" from "no" using EEG because the EEG signals in response to the displayed letters "y" and "n" differ. This would mean that the visual responses to the letters used to cue the thinker, rather than the covertly spoken words, lead to discernible differences in the EEG recordings. One wonders more generally how a classification result depends on the prompt: for example, a seen "y" vs. a heard "y". It could be also the case that a particular person, while remaining silent, just happens to move his or her vocal tract muscles when thinking "yes" but not when thinking "no". This would mean that the EEG-based differences between thinking "yes" and "no" depend on the degree of motor response. Indeed, there are many ways in which a straightforward interpretation of classifiability can prove false, and a major aim of our work is to conduct experiments that pin down more precisely what brain networks are contributing to EEG classification results.
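The classification analysis described above can be illustrated with a toy example. The sketch below is not the project's actual pipeline: it assumes each trial has already been reduced to two band-power features, generates synthetic "yes"/"no" trials, and applies a simple nearest-centroid classifier with a held-out test set. All names and parameters are hypothetical.

```python
import random
random.seed(0)  # deterministic synthetic data

def make_trial(label):
    """Synthetic two-feature trial: 'yes' and 'no' have offset means."""
    base = (1.0, 3.0) if label == "yes" else (3.0, 1.0)
    return [m + random.gauss(0, 0.4) for m in base], label

def centroid(features):
    """Component-wise mean of a list of feature vectors."""
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

def classify(x, centroids):
    """Nearest-centroid rule: pick the label whose centroid is closest."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(x, centroids[lab]))

train = [make_trial(lab) for lab in ["yes", "no"] * 20]
test = [make_trial(lab) for lab in ["yes", "no"] * 10]

centroids = {lab: centroid([x for x, l in train if l == lab])
             for lab in ("yes", "no")}
accuracy = sum(classify(x, centroids) == l for x, l in test) / len(test)
```

High accuracy on such a test is the starting point, not the conclusion: as the text notes, one must still rule out visual-cue and muscle-artifact explanations for the separability.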

[Figure: Theta-band EEG rhythms during speech production. Frequency plots of brain-wave activity at multiple electrode locations show that theta-band (4-7 Hz) activity is weak at rest but is elicited when the syllable /ba/ is produced in a rhythmic sequence at a steady rate; the difference in activity is localized atop central electrode sites. Pilot studies suggest that EEG provides information that helps to classify spoken sentences and syllables.]

A more complex type of experiment involves training. Suppose that thinking different words leads to small differences in EEG signals. One can try to help a person generate stronger, more informative EEG signals by providing feedback during an experiment. The person uses feedback information concerning the strength of EEG-based information to produce stronger signals. Such experiments typically follow earlier classification experiments, because classification experiments provide needed information on what aspects of the EEG recordings let one differentiate what is spoken covertly. The strongest published results for classification using EEG concern speech that one listens to rather than speech that one produces covertly. In the late 1990s, a group at Stanford classified EEG signals recorded while listening to small sets of words or sentences, with mixed success (Suppes et al., 1997, 1998). MEG has also been used to classify heard speech. Numminen and Curio (1999) used MEG to study auditory cortical responses to speech and showed activity related to speech monitoring in both overt and covert production (see also Houde et al., 2002). Ahissar and colleagues (2001) recorded MEG data from natural spoken sentences presented at different time-compression ratios. Their principal component analysis of the MEG data showed that one component correlated well with envelope information and its degradation consequent to compression. Luo and Poeppel (2007) showed that, in single trials recorded when listening to natural spoken sentences, the phase of the theta band (3-8 Hz) recorded from auditory cortex plays a critical role in speech representation. MEG theta-band responses relate to the syllabic structure of sentence materials, which in turn affects the response's temporal envelope.

Classification of heard sentences using MEG (after Luo and Poeppel, 2007). (A) Scalp distribution of theta-band phase representation of syllabic structure. (B) Perfect classification of a small set of sentences based on MEG theta-band responses (central diagonal). The strongest competing sentences in the classification are very similar to the correct sentence.

A neurolinguistic framework for speech production is needed to understand and pursue such results. Studies of cortical speech mechanisms suggest that, within temporal and frontal lobe cortices, there is a direct speech production pipeline that ranges from earlier, conceptually-driven word selection through later selection of corresponding articulatory motor commands (Hickok and Poeppel, 2004, 2007; Indefrey and Levelt, 2004).
Brain areas involved in speech processing (after Hickok and Poeppel, 2004). A ventral processing stream (the "what" pathway) maps sound and meaning. Activity in the superior temporal sulcus and the posterior inferior temporal lobe (pITL) interfaces sound-based speech representations in the superior temporal gyrus with widely-distributed conceptual representations. A dorsal processing stream (the "where," "how" or "do" pathway) involves activity in cortex at the junction of the parietal and temporal lobes in the sylvian fissure (area Spt), which projects to frontal cortical areas that generate articulatory codes: posterior inferior frontal and dorsal premotor sites (pIF/dPM).

Much evidence concerning the speech production pipeline comes from studies using EEG techniques, which have the temporal resolution needed to discern staged processing. These studies use evoked response potentials like the N200, a go/no-go signal with magnitude a function of the neural activity required for response inhibition, to measure times at which various stimuli interfere with speech production (e.g., Schiller et al., 2003). MEG has also contributed; Salmelin and colleagues (1994) used MEG to trace the time-course of cortical activation during both overt and covert picture naming. Results suggest that syllables are basic representations in cortical speech production, and that they are generated serially from representations of syntactically-marked words and used to retrieve gestural information that drives motor articulation: concepts to words to syllables to phonemes to motor articulation. The Levelt model of speech production relates this linguistic pipeline to cortical activity (Indefrey and Levelt, 2004) localized in fMRI and PET studies using a variety of overt and covert speech production tasks. The model localizes (1) lexical selection from conceptual processes to left medial temporal gyrus and environs; (2) retrieval of a word's phonological code some 70 msec later to (left) Wernicke's area; (3) sequential syllabification of a word some 100 msec later to (left) Broca's area, and (4) phonetic encoding of the syllables some 150 msec later to left inferior lateral frontal cortex and to right supplementary motor area. Can EEG be used to make sense of the rumbling of this pipeline? Our intent is to learn what linguistic information can be extracted from EEG recordings of this direct speech production pipeline, when one thinks to oneself. 
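The staged timing of the Levelt model as summarized above can be laid out as a small data structure. This is an illustrative sketch, not project code; the latencies are the approximate increments quoted in the text, and the region labels follow the localizations listed there.

```python
# Approximate stages of the speech-production pipeline as summarized
# from Indefrey and Levelt (2004): each entry gives the stage name, the
# rough delay (ms) after the previous stage, and its localization.

PIPELINE = [
    ("lexical selection",  0,   "left medial temporal gyrus and environs"),
    ("phonological code",  70,  "(left) Wernicke's area"),
    ("syllabification",    100, "(left) Broca's area"),
    ("phonetic encoding",  150, "left inferior lateral frontal cortex / right SMA"),
]

def timeline(pipeline):
    """Cumulative onset time (ms) of each stage relative to the first."""
    t, out = 0, []
    for stage, delay, region in pipeline:
        t += delay
        out.append((stage, t, region))
    return out

onsets = timeline(PIPELINE)
```

Laid out this way, the whole concept-to-articulation cascade spans a few hundred milliseconds, which is why the millisecond-scale temporal resolution of EEG matters for observing it.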
We are particularly interested in the involvement of brain networks which help with speech motor articulation and with brain networks involved in generating the auditory images which accompany covert speech: the words heard in one's head while thinking. Our expectation is that decoding the EEG recordings of a covert speech stream successfully will rely on context in a way similar to that found when performing standard automatic speech recognition (ASR). A particular element of speech that is signaled through a spoken speech waveform, be it a phoneme, syllable or word, is more reliably identified when taken in the context of preceding speech elements. We will work to adapt standard ASR to the decoding of EEG signals concerning covert speech streams.
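The point about context can be made concrete with a toy decoder. The sketch below is hypothetical and not the project's system: per-frame classifier scores for a covert-speech stream are combined with a bigram model over symbols via Viterbi decoding, so that an ambiguous frame is resolved by its predecessor, as in standard ASR. The symbol set, probabilities and scores are all invented for the demo.

```python
import math

SYMBOLS = ["yes", "no"]

# Hypothetical bigram model over covertly spoken symbols: in this toy,
# a symbol tends to repeat the one before it.
BIGRAM = {("yes", "yes"): 0.8, ("yes", "no"): 0.2,
          ("no", "no"): 0.8, ("no", "yes"): 0.2}
PRIOR = {"yes": 0.5, "no": 0.5}

def decode(frame_scores):
    """Viterbi decoding: per-frame classifier scores times bigram context."""
    best = {s: math.log(PRIOR[s] * frame_scores[0][s]) for s in SYMBOLS}
    back = []
    for scores in frame_scores[1:]:
        new, ptr = {}, {}
        for s in SYMBOLS:
            prev = max(SYMBOLS, key=lambda p: best[p] + math.log(BIGRAM[(p, s)]))
            new[s] = best[prev] + math.log(BIGRAM[(prev, s)] * scores[s])
            ptr[s] = prev
        back.append(ptr)
        best = new
    last = max(SYMBOLS, key=lambda s: best[s])   # best final symbol
    path = [last]
    for ptr in reversed(back):                   # trace the best path back
        path.append(ptr[path[-1]])
    return path[::-1]

# Frames 1 and 3 are confidently "yes"; frame 2 is ambiguous on its own.
frames = [{"yes": 0.9, "no": 0.1},
          {"yes": 0.45, "no": 0.55},
          {"yes": 0.45, "no": 0.55}][:2] + [{"yes": 0.9, "no": 0.1}]
decoded = decode(frames)
greedy = [max(f, key=f.get) for f in frames]     # frame-by-frame argmax
```

Frame-by-frame argmax reads the middle frame as "no"; the bigram context flips it, which is exactly the benefit context brings to noisy EEG evidence.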

Intended Direction
The project aims also to discern from EEG recordings an intended direction that may be signaled by a thinker to select a target of communication. Two components are of special interest: EEG signals concerning overt orienting movements, like those of the eyes, and signals concerning shifts of attention. Saccadic eye movements are overt indicators of attentional orientation that depend on a generator network spanning cortical frontal eye fields and subcortical neurons in the substantia nigra, superior colliculus and the brainstem (Boucher et al., 2007). Shifts in attention may occur covertly and are thought to result from the activity of attentional circuits in the frontal and parietal lobes (Corbetta and Shulman, 2002). These are thought to feed back onto visual areas in occipital cortex (Praamstra et al., 2005); such feedback is thought to promote the facilitation of sensory processing from the intended direction. Shift in gaze is closely related to shift of attention. A premotor theory of attention (Rizzolatti et al., 1994) suggests that the allocation of spatial attention involves planning for but not executing a saccade. Yet it is possible to shift attention without shifting gaze (Hoffman and Subramanian, 1995), and some evidence suggests that spatial attention shifts may occur in the absence of saccade preparation (Juan et al., 2004).

Event-related potentials measured using EEG that result from a shift of attention are threefold. They include EDAN, an early posterior negativity in the hemisphere contralateral to the attended hemifield, thought related to the spatial information provided by a cue in a covert orienting task (Harter et al., 1989); LDAP, a later contralateral positivity thought related to facilitation of sensory areas (Harter et al., 1989); and ADAN, an enhanced negativity at frontal contralateral electrodes likely linked to activation of frontal lobe neurons involved in the control of spatial shifts (Corbetta et al., 1993; Nobre et al., 2000). These ERPs are supramodal, in that they occur independently of the sensory modality used to modulate attention (Eimer et al., 2002).

Alpha-band activity in frontal, parietal and occipital cortex, recorded by EEG in the 8-14 Hz range, provides further information concerning visuospatial attention that may very possibly be recovered reliably from single trials. Alpha-band amplitudes are suppressed in parieto-occipital cortex contralateral to the covertly-attended hemifield and enhanced in cortex contralateral to the to-be-ignored hemifield (Worden et al., 2000; Sauseng et al., 2005; Thut et al., 2006; Capotosto et al., 2008). Synchronization in the form of alpha-band phase-coupling increases between frontal and parieto-occipital alpha activity in the hemisphere contralateral to the attended region, which suggests that the modulation of alpha activity in contralateral posterior parieto-occipital cortex is controlled by prefrontal regions (Sauseng et al., 2005). Attentional shifts also modulate steady-state visual evoked potentials (SSVEPs). These potentials can be studied by amplitude-modulating visual stimuli at particular frequencies (e.g., Srinivasan et al., 2006).
One extracts the response to such a frequency-tagged visual stimulus by examining energy in EEG records at the frequency of tagging (Ding et al., 2006). Attending to a stimulus location modulates the SSVEP through energy increase, even at locations attended covertly (Morgan et al., 1996; Kelly et al., 2005). Spatial shifts of attention that depend on auditory stimulation also give rise to event-related potentials (e.g., Teder-Salejarvi et al., 1999), modulation of alpha-band activity, and modulation of steady-state evoked potentials, which suggest that such shifts are spatial, not merely visual. Finally, shifts in attention are thought to occur not only among spatial locations but among object features like color (Muller et al., 2006) and among objects themselves. We hypothesize that EEG recordings related to spatial attention in single trials can be used in four basic ways to provide information concerning intended direction.

(1) One can discern the hemifield to which attention is lateralized through analysis of attentional modulation of alpha-band activity and of steady-state visual and auditory evoked potentials;

(2) one can discern which of several frequency-tagged objects attention is directed towards through analysis of attentional modulation of steady-state visual and auditory evoked potentials;

(3) one can estimate a continuous-valued intended direction by considering information concerning shift in gaze captured through EEG and through eye-tracking, in addition to attentional modulation of alpha-band activity and of steady-state evoked potentials; and

(4) by using gaze direction and steady-state evoked potential information, one can most reliably discern the intended target of communication, as these provide information concerning both attended direction and attended object.
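The first two uses above can be sketched together on synthetic data. This is an illustrative toy, not project code: it assumes alpha-band (10 Hz) power from left- and right-hemisphere parieto-occipital channels, infers the attended hemifield from a lateralization index (attention suppresses alpha contralaterally), and extracts the response to a stimulus frequency-tagged at 12 Hz by measuring energy at the tagging frequency, all with a single-bin DFT. Sampling rate, frequencies and amplitudes are assumed values.

```python
import math

RATE = 256            # assumed EEG sampling rate (Hz)
N = RATE * 2          # two seconds of synthetic data

def sine(freq, amp):
    """Synthetic single-frequency component."""
    return [amp * math.sin(2 * math.pi * freq * t / RATE) for t in range(N)]

def band_power(signal, freq, rate=RATE):
    """Energy at one frequency via a single DFT coefficient."""
    re = sum(x * math.cos(2 * math.pi * freq * i / rate) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / rate) for i, x in enumerate(signal))
    return (re * re + im * im) / len(signal)

# Synthetic trial: attention to the LEFT hemifield suppresses alpha (10 Hz)
# over the RIGHT hemisphere; the attended stimulus is tagged at 12 Hz.
left_hemi = sine(10, 2.0)     # strong alpha over the left hemisphere
right_hemi = sine(10, 0.5)    # suppressed alpha, contralateral to attention
tagged = [a + b for a, b in zip(sine(12, 1.0), sine(10, 0.5))]

# (1) Hemifield from the alpha lateralization index (L - R) / (L + R).
l, r = band_power(left_hemi, 10), band_power(right_hemi, 10)
index = (l - r) / (l + r)
hemifield = "left" if index > 0 else "right"

# (2) Attended object from energy at its tagging frequency.
ssvep_12 = band_power(tagged, 12)   # energy at the 12 Hz tag
ssvep_15 = band_power(tagged, 15)   # control frequency with no stimulus
```

Real single-trial data are far noisier, which is why the text frames single-trial recovery of these signatures as a hypothesis rather than a given.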

Potential Applications
The funded research is basic in nature. A functioning brain-computer interface for communicating thought and the intended recipient like that described above is years away. Yet one can identify several areas of future application. These include the development of a silent communications system for dispersed ground forces, of a speech-based means of communication for locked-in individuals, and of commercial communications devices based on brain-wave decoding.

References

Ahissar, E., Nagarajan, S., Ahissar, M., Protopapas, A., Mahncke, H. & Merzenich, M.M. (2001). Speech comprehension is correlated with temporal response patterns recorded from auditory cortex. Proceedings of the National Academy of Sciences 98, 13367-13372.
Boucher, L., Palmeri, T.J., Logan, G.D. & Schall, J.D. (2007). Inhibitory control in mind and brain: an interactive race model of countermanding saccades. Psychological Review 114, 376-397.
Capotosto, P., Babiloni, C., Romani, G.L. & Corbetta, M. (2008). Posterior parietal cortex controls spatial attention through modulation of anticipatory alpha rhythms. Nature Precedings, Feb. 1.
Corbetta, M., Miezin, F.M., Shulman, G.L. & Petersen, S.E. (1993). A PET study of visuospatial attention. Journal of Neuroscience 13, 1202-1226.
Corbetta, M. & Shulman, G.L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience 3, 201-215.
Dewan, E.M. (1967). Occipital alpha rhythm eye position and lens accommodation. Nature 214, 975-977.
Ding, J., Sperling, G. & Srinivasan, R. (2006). Attentional modulation of SSVEP power depends on the network tagged by the flicker frequency. Cerebral Cortex 17, 1016-1029.
Eimer, M., van Velzen, J. & Driver, J. (2002). Cross-modal interactions between audition, touch, and vision in endogenous spatial attention: ERP evidence on preparatory states and sensory modulations. Journal of Cognitive Neuroscience 14, 254-271.
Farwell, L.A. & Donchin, E. (1988). Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology 70, 510-523.

Harter, M.R., Miller, S.L., Price, N.J., LaLonde, M.E. & Keyes, A.L. (1989). Neural processes involved in directing attention. Journal of Cognitive Neuroscience 1, 223-237.
Hickok, G. & Poeppel, D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92, 67-99.
Hickok, G. & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience 8, 393-402.
Hoffman, J.E. & Subramanian, B. (1995). The role of visual attention in saccadic eye movements. Perception & Psychophysics 57, 787-795.
Houde, J.F., Nagarajan, S.S., Sekihara, K. & Merzenich, M.M. (2002). Modulation of the auditory cortex during speech: an MEG study. Journal of Cognitive Neuroscience 15, 1125-1138.
Indefrey, P. & Levelt, W.J.M. (2004). The spatial and temporal signatures of word production components. Cognition 92, 101-144.
Juan, C.-H., Shorter-Jacobi, S.M. & Schall, J.D. (2004). Dissociation of spatial attention and saccade preparation. Proceedings of the National Academy of Sciences 101, 15541-15544.
Kelly, S.P., Lalor, E.C., Reilly, R.B. & Foxe, J.J. (2005). Visual spatial attention tracking using high-density SSVEP data for independent brain-computer communication. IEEE Transactions on Neural Systems and Rehabilitation Engineering 13, 172-178.
Luo, H. & Poeppel, D. (2007). Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron 54, 1001-1010.
Morgan, S.T., Hansen, J.C. & Hillyard, S.A. (1996). Selective attention to stimulus location modulates the steady state visual evoked potential. Proceedings of the National Academy of Sciences 93, 4770-4774.
Muller, M.M., Andersen, S., Trujillo, N.J., Valdes-Sosa, P., Malinowski, P. & Hillyard, S.A. (2006). Feature-selective attention enhances color signals in early visual areas of the human brain. Proceedings of the National Academy of Sciences 103, 14250-14254.
Nobre, A.C., Sebestyen, G.N. & Miniussi, C. (2000). The dynamics of shifting visuospatial attention revealed by event-related potentials. Neuropsychologia 38, 964-974.
Numminen, J. & Curio, G. (1999). Differential effects of overt, covert and replayed speech on vowel-evoked responses of the human auditory cortex. Neuroscience Letters 272(1), 29-32.
Nunez, P.L. & Srinivasan, R. (2006). Electric fields of the brain: the neurophysics of EEG, 2nd ed. New York: Oxford University Press.
Praamstra, P., Boutsen, L. & Humphreys, G.W. (2005). Frontoparietal control of spatial attention and motor intention in human EEG. Journal of Neurophysiology 94, 764-774.

Rizzolatti, G., Riggio, L. & Sheliga, B.M. (1994). Space and selective attention. In Umilta, C. & Moscovitch, M. (Eds.), Attention and Performance XV (Cambridge: MIT Press), pp. 231-265.
Salmelin, R., Hari, R., Lounasmaa, O.V. & Sams, M. (1994). Dynamics of brain activation during picture naming. Nature 368, 463-465.
Sauseng, P., Klimesch, W., Stadler, W., Schabus, M., Doppelmayr, M., Hanslmayr, S., Gruber, W.R. & Birbaumer, N. (2005). A shift of visual spatial attention is selectively associated with human EEG alpha activity. European Journal of Neuroscience 22, 2917-2926.
Schiller, N.O., Bles, M. & Jansma, B.M. (2003). Tracking the time course of phonological encoding in speech production: an event-related brain potential study. Cognitive Brain Research 17, 819-831.
Srinivasan, R., Bibi, F.A. & Nunez, P.L. (2006). Steady-state visual evoked potentials: distributed local sources and wave-like dynamics that are sensitive to flicker frequency. Brain Topography 18, 167-187.
Suppes, P., Lu, Z.-L. & Han, B. (1997). Brain wave recognition of words. Proceedings of the National Academy of Sciences USA 94, 14965-14969.
Suppes, P., Han, B. & Lu, Z.-L. (1998). Brain wave recognition of sentences. Proceedings of the National Academy of Sciences USA 95, 15861-15866.
Teder-Salejarvi, W.A., Hillyard, S.A., Roder, B. & Neville, H.J. (1999). Spatial attention to central and peripheral auditory stimuli as indexed by event-related potentials. Cognitive Brain Research 8, 213-227.
Thut, G., Nietzel, A., Brandt, S.A. & Pascual-Leone, A. (2006). Alpha-band electroencephalographic activity over occipital cortex indexes visuospatial attention bias and predicts visual target detection. Journal of Neuroscience 26, 9494-9502.
Worden, M.S., Foxe, J.J., Wang, N. & Simpson, G.V. (2000). Anticipatory biasing of visuospatial attention indexed by retinotopically specific alpha-band electroencephalography increases over occipital cortex. Journal of Neuroscience 20, 1-6.

Publications

D'Zmura, M., Deng, S., Lappas, T., Thorpe, S. & Srinivasan, R. (in press). Toward EEG sensing of imagined speech. In Jacko, J.A. (Ed.), Human-Computer Interaction, Part I, HCII 2009, LNCS 5610 (Berlin: Springer), 40-48.
Srinivasan, R., Thorpe, S., Deng, S., Lappas, T. & D'Zmura, M. (in press). Decoding attentional orientation from EEG spectra. In Jacko, J.A. (Ed.), Human-Computer Interaction, Part I, HCII 2009, LNCS 5610 (Berlin: Springer).
