
How Did We Get to This Point?

When you think of the past, what do you think of? Black-and-white movies? Big, chunky computers? Typewriters? Well, little did you know, neurotechnology, although it seems very advanced today, has roots in the early half of the 1900s. But when and where, specifically, did neurotechnology start? What were some of the biggest landmarks that allowed us to make the sort of progress we are making today? Let’s take a look at a brief history of neurotechnology and the brain-computer interface (BCI), and their evolution throughout the years. If you want to learn more about BCI and how it works, be sure to check out Manya’s (who is also a member of the BCI team) posts on the blog page!


1929: The Creation of EEG

Hans Berger, a German psychiatrist, first recorded electroencephalography, or EEG, in 1924 and had refined it to working precision by 1929. It served as a method to diagnose and study neurological and psychiatric disorders. What was his inspiration to create EEG, you might ask? Well, it’s a bit of a strange story. He became interested in studying the brain’s connection with the outside world and its “psychic” abilities after falling off a horse. Now, that event in itself is not exactly what inspired him: after the fall, he received a telegram from his father asking if he was okay, because his sister had gotten a weird feeling and urged their father to check up on Berger. His sister’s gut feeling made Berger intensely curious about how the brain could produce these strange intuitions. Obviously, that is a huge question that even researchers today cannot fully answer, but Berger eventually narrowed his focus to studying the brain’s electrical activity. Even though he was producing EEGs that were understandable, his actual methods of measurement were kind of strange. He tried silver foil electrodes, which don’t seem too scary, but he also tried silver needles placed under the scalps of test subjects. And if that’s not concerning enough, he often used the measurement needles on himself… Nevertheless, Berger’s invention was crucial to the field of neurotechnology, since EEGs now allow us to measure brain signals and use them as an input for BCI devices in order to produce tangible effects.


1957: The Perceptron

“The Perceptron” may sound like a dramatic sci-fi film, but I swear it is way cooler and WAY more complicated than you think. It was created by Frank Rosenblatt at the Cornell Aeronautical Laboratory in 1957. Explained simply, the perceptron was the first artificial neural network: a simplified model of how the brain processes and distinguishes between different visual stimuli. It was an algorithm capable of machine learning, meaning it allowed machines to learn to perform classification from examples. Classification is an important part of signal processing in technological devices: a computer uses algorithms, such as the perceptron, to categorize data and recognize patterns. Although the perceptron had its limitations, it served as a foundation for artificial intelligence and advanced technological devices, including BCI.
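To make the idea of learned classification concrete, here is a minimal sketch of a perceptron in Python. The update rule is Rosenblatt's classic one, but the toy task (learning the logical AND of two inputs), the function names, and the parameter values are my own illustrative choices, not anything from the original 1957 machine.

```python
# A minimal sketch of a perceptron: a single artificial "neuron" that
# learns a linear decision boundary from labeled examples.
# The AND dataset below is an illustrative toy task, not historical data.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights w and bias b so sign(w·x + b) matches the labels."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Fire (output 1) only if the weighted sum crosses the threshold.
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation > 0 else 0
            # Rosenblatt's rule: nudge weights toward misclassified examples.
            error = target - predicted
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy classification task: learn the logical AND of two binary inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron is guaranteed to find a separating boundary; the famous limitation Rosenblatt's critics pointed out is that a single perceptron cannot learn tasks like XOR, which no straight line can separate.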


1998: Phil Kennedy and The Origin of BCI

Phil Kennedy was a neurologist whose career changed drastically when he crossed paths with Johnny Ray, a Vietnam veteran who suffered a brain-stem stroke. The brainstem is the main communication path between the brain and the spinal cord, so if it is damaged, as in Johnny’s case, you can lose the ability to breathe, feel, or move on your own, yet still be able to think and be aware. This condition is known as being “locked in” (not to be confused with a vegetative state, in which awareness is lost). Dr. Kennedy was able to restore Johnny’s communication with the outside world using invasive electrodes, in what is considered the first instance of a brain-computer interface. The procedure to implant the electrodes took 12 hours, and it placed the electrodes, encased in two glass cones, into the cortical area of the brain controlling movement of the left hand. The hope was that Johnny could imagine moving his left hand, causing neurons in that area to fire more frequently, and the electrical impulses traveling through their axons could be intercepted by the electrodes and sent to a receiver that translated the signal for Johnny’s computer. From there, the signals could be used to move a cursor on the computer that could help him communicate and express his feelings.*

*FYI- Dr. Monti, who I mentioned in my last blog post, is doing work just like this, using BCI to help people in vegetative states, right here at UCLA!
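To give a flavor of the decoding idea described above, here is a deliberately oversimplified Python sketch: whenever the recorded firing rate rises above a resting baseline, the cursor takes a step. Everything here (the names `baseline_hz` and `step_px`, the sample values, the one-dimensional cursor) is hypothetical and of my own invention; Kennedy's actual system translated the neural signals in a far more sophisticated way.

```python
# A hypothetical, highly simplified sketch of firing-rate decoding:
# imagined movement raises the neuron's firing rate above its resting
# baseline, and each above-baseline sample nudges the cursor forward.
# The numbers and thresholds are made up for illustration.

def decode_cursor_steps(firing_rates_hz, baseline_hz=20.0, step_px=5):
    """Map a stream of firing-rate samples to cursor positions (pixels)."""
    x = 0
    positions = []
    for rate in firing_rates_hz:
        if rate > baseline_hz:  # firing faster than rest → "move" intent
            x += step_px
        positions.append(x)
    return positions

# The rate rises in the middle of this (invented) recording, as if the
# patient imagined a hand movement, then relaxed again.
rates = [12, 18, 35, 40, 38, 15, 10]
print(decode_cursor_steps(rates))  # → [0, 0, 5, 10, 15, 15, 15]
```

Even this crude thresholding scheme shows why the approach works at all: the decoder never needs to "read thoughts," only to detect a reliable change in measurable electrical activity and map it onto an action.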

2004: Matthew Nagle - Implanting the “BrainGate”

The “BrainGate” was another invasive BCI, but it was more advanced than the device developed by Dr. Kennedy, and it granted Matthew Nagle, BrainGate’s first human participant, more mental control over his environment than Johnny Ray had. Nagle had been stabbed and left completely paralyzed in all four limbs. Dr. John Donoghue, the neuroscientist who developed BrainGate, implanted the device, which included 96 electrodes, into the motor cortex of Nagle’s brain. The device was connected to external wires that allowed him to move a prosthetic arm, open emails, and even play video games. Rather than trying to fix the motor neurons in the brain, which communicated perfectly normally, Dr. Donoghue realized the problem lay in the unresponsiveness of the actual motor nerves, caused by the disrupted pathway between the brain and the rest of the body. This accomplishment kicked off a race among researchers to build much more efficient BCIs, ones that were more portable and practical for patients, many of whom were paralyzed at a young age and thus needed a device they could easily use for a significant portion of their lives. BrainGate became a very popular device and opened up a new world for neurorehabilitation and medical applications of neurotechnology.



Neurotechnology and the brain-computer interface have obviously come a long way, from simply measuring brain signals to using those signals to alter one’s environment, and there is still so much more that the field can accomplish. But with that come potential ethical issues to consider. Just how much are we able to accomplish with neurotechnology? Where do we draw the line between beneficial neurotechnology and potentially dangerous neurotechnology? After all, neurotechnology allows us to go beyond our physical capabilities and do things that we once considered science fiction. It’s interesting to think not just of where we will go in the future with neurotechnology, but also of how far we will go in expanding human capabilities.


