Progression Towards A Universal Communication Technology

Introduction

Throughout the history of mankind, there has been an eternal human desire to communicate. This thirst for improvement did not discriminate; it manifested itself in every human civilization. No matter what artificial divisions were imposed on people, this force bound everyone together in the pursuit of a common goal. The purpose of this paper is to identify a communication technology that is truly universal, one capable of achieving the common goal of all civilizations: overcoming existing communication and cultural barriers.

First, languages were developed to allow communication among people in a given vicinity. This, however, was not adequate. Soldiers manning the Great Wall of China and certain Native American tribes used smoke signals to communicate between guardposts or tribes. Following the natural line of technological progression, courier and mail systems sprang up to let people exchange letters with relative ease. Once again, waiting weeks just to pass a message along was so deplorable a method that Samuel Morse, after much experimentation, invented the telegraph (History.com, 2009). This was all great, but to rehash a common theme, it still wasn't good enough.

The next rung on this ladder was built by a poor Florentine, Antonio Meucci. What was remarkable about him was that he was the sole inventor of the telephone (it took decades of legal struggles for him to finally be recognized as the legitimate claimant to the title). The telephone sufficed for people's needs for many decades. But at a time of heightened Cold War distress, the US government recognized the vulnerability of the existing telecommunications network. As a result, DARPA, the Defense Advanced Research Projects Agency, was blessed with a new wave of funding (Isaacson, 2014). The fruit of this labor was the groundwork for the Internet, and it didn't take long for the modern Internet to blossom once that groundwork was laid. This is where the world stands today in terms of communications. Sure, people can communicate with others via their voice. But can anyone really do anything more?

This admittedly incredible state of affairs is still a far cry from Ali Baba's voice-activated actions, which enamored the world in Ali Baba and the Forty Thieves. As happens in the classic tale, the poor woodcutter Ali Baba discovered a cave that forty thieves used as a vault for their stolen goods. What is special about this cave is that its door can be opened by the mere utterance of the phrase "open sesame" (in the end, human greed causes multiple deaths, but that's not the point). Ever since it was dreamed up, the suppressed urge of many has been to issue commands to objects and have the objects obey. This story demonstrates that even centuries ago, voice interaction with objects was being dreamed of, something that has not been fully accomplished even today.

To a degree, this has been achieved. Apple's Siri can send a text, give directions, or order a pizza. Amazon's Echo can do the exact same things, only on a different platform. The near future will be filled with simple improvements to existing technologies that are bound to vastly change the game.

How Speech Recognition Technology Works

People cannot simply be satisfied with using something new without understanding why it does what it does. The transistor and other revolutionary inventions were created by people who took things apart, people who were curious; they were made by technological adventurers. That is why, before this article goes any further, the technological background shall be examined. The finer technical details are not necessary and will be omitted; only the guiding principles will be highlighted.

First, the technology, let's just call it FD for "future device," must be able to pick up sound of any kind. Well, what is sound? In layman's terms, sound is "vibrations in the air". Every possible sound has its own distinctive wave, which identifies it as a "suh" or a "teh" sound (DOSITS, 2016). FD then uses its built-in dynamic microphone to pick up the sound. In a dynamic microphone, a coil of wire attached to a diaphragm sits in the field of a magnet; when sound waves vibrate the diaphragm, the relative motion between coil and magnet generates an electric signal. The assembly is so sensitive that even minuscule differences between sound waves produce slightly different motion, and therefore slightly different electric signals, which can then be analyzed as specific sounds (MediaCollege.com, 2017). This isn't anything revolutionary. The technology was invented in 1876, a time when Egypt was under Ottoman, not British, suzerainty, the very year Otto's internal combustion engine patent was filed and Twain's The Adventures of Tom Sawyer was published.

Okay, so sounds can be picked up, but they cannot yet be analyzed. Fast forward to 1971, almost a century later: a lab funded by the US government developed a speech recognition algorithm, though it was still crude. Over the years some of the kinks were worked out by Apple to become the novelty that is Siri. But how does Siri work? Chris Woodford summed it up in his well-sourced article. According to him, "Broadly speaking, there are four different approaches a computer can take if it wants to turn spoken sounds into written words (Woodford, 2015):

  • Simple pattern matching (where each spoken word is recognized in its entirety—the way you instantly recognize a tree or a table without consciously analyzing what you’re looking at)
  • Pattern and feature analysis (where each word is broken into bits and recognized from key features, such as the vowels it contains)
  • Language modeling and statistical analysis (in which a knowledge of grammar and the probability of certain words or sounds following on from one another is used to speed up recognition and improve accuracy)
  • Artificial neural networks (brain-like computer models that can reliably recognize patterns, such as word sounds, after exhaustive training)”.
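
As a rough illustration of the first of Woodford's approaches, simple pattern matching, the sketch below compares an incoming utterance against a small set of stored word templates and picks the nearest match. The feature extraction, the toy word list, and the synthetic audio are illustrative assumptions for this paper, not any product's actual method.

```python
# A toy sketch of the "simple pattern matching" approach, assuming each word can
# be reduced to a fixed-length spectral fingerprint. The feature choice, word
# list, and synthetic audio are illustrative assumptions, not a real recognizer.
import numpy as np

def features(waveform, n_bins=32):
    """Reduce a waveform to a normalized, coarsely binned magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(waveform))
    pooled = np.array([chunk.mean() for chunk in np.array_split(spectrum, n_bins)])
    return pooled / (np.linalg.norm(pooled) + 1e-9)

def recognize(waveform, templates):
    """Return the stored word whose template is closest to the input's features."""
    f = features(waveform)
    return min(templates, key=lambda word: np.linalg.norm(f - templates[word]))

# Demo with synthetic "words" built from different tone mixtures.
t = np.linspace(0, 1, 8000)
make = lambda *freqs: sum(np.sin(2 * np.pi * f * t) for f in freqs)
templates = {"open": features(make(200, 450)), "close": features(make(300, 700))}
print(recognize(make(200, 450) + 0.1 * np.random.randn(t.size), templates))  # "open"
```

Real recognizers lean on the later approaches in the list, breaking words into sub-word features and applying language statistics, which is why they scale far beyond the handful of words a pure template matcher can handle.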

So now that it is known how this technology works, anyone with a curious mind must inevitably be wondering where it stands today.

Ongoing Developments

Disrupting a common theme, the newest improvements to this technology aren't being made in a secret American lab; they are a product of Beijing's Baidu. For those who don't know, Baidu is the main search engine used in China. While still advanced, Siri is, considering the potential, the typewriter of this field: it seriously lags behind what it could easily become. Baidu's recognition technology is not only more accurate, working in loud and crowded environments like Beijing's busy streets and surpassing the abilities of humans, but it also works in both English and Mandarin (Knight, 2015). As the days turn to years, Baidu will work out the final shortcomings to create voice recognition software that is 99% accurate, able to pick out only your voice in a loud area, and able to start without a physical touch.

Keep in mind that Earth has about 7 billion people right now, roughly 2 billion of whom speak either Mandarin or English. That still leaves 5 billion outliers. While Baidu may soon expand to some of the other major languages of the world, it will be near-impossible to recognize even 80% of the world's current tongues. What could be done to include more people?

Artificial Intelligence

Enter artificial intelligence (AI). Far from the days when it was merely a sci-fi threat, it may be the most important piece of the puzzle that is the worldwide adoption of voice-based technology. Tom Harris describes AI very accurately in his article:

“Like the term “robot” itself, artificial intelligence is hard to define. Ultimate AI would be a recreation of the human thought process — a man-made machine with our intellectual abilities. This would include the ability to learn just about anything, the ability to reason, the ability to use language and the ability to formulate original ideas. Roboticists are nowhere near achieving this level of artificial intelligence, but they have made a lot of progress with more limited AI. Today’s AI machines can replicate some specific elements of intellectual ability” (Harris, 2016). To summarize, AI is our best attempt not merely to mimic but to recreate the human brain, and in doing so, it is hoped, to make it even smarter than a normal human brain.

Rather than hardwiring every single one of the world's 7,106 languages, why not have FD learn them itself (Aulakh, 2013)? AI can do this for it. Basically, FD would only need to be taught a few of the major languages; then, to learn the derivatives and dialects, it would simply need to be used by speakers of those languages. This method is far faster and will be more accurate, because no matter how close, translations can never do a language justice. After a few days or weeks, depending on how many people are speaking to FD, not only will the languages be learned, but the slang and the minutiae will be picked up as well. On a side note, this is also a linguist's dream: there is no other way thousands of languages could be documented and archived in such a short amount of time. Plus, if developed in time, FD could save dying languages. It is currently estimated that by 2100, half of the 7,106 languages will have disappeared completely (Aulakh, 2013). Having people learn these languages may be a lost cause, but if Maori or any other dying language were completely recorded and archived, it could be picked up again.
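
To make the learn-from-usage idea concrete, here is a minimal sketch under assumed rules: the device promotes an unfamiliar word into its lexicon once it has heard speakers use it a few times. The seed vocabulary and threshold are made up for illustration; a real system would have to learn pronunciation and grammar as well, not just vocabulary.

```python
# A minimal sketch of the learn-from-usage idea, assuming FD simply promotes an
# unfamiliar word into its lexicon after hearing it a few times.
from collections import Counter

class UsageLearner:
    def __init__(self, seed_lexicon, promote_after=3):
        self.lexicon = set(seed_lexicon)   # words FD already "knows"
        self.heard = Counter()             # tallies of unknown words heard so far
        self.promote_after = promote_after

    def hear(self, utterance):
        for word in utterance.lower().split():
            if word in self.lexicon:
                continue
            self.heard[word] += 1
            if self.heard[word] >= self.promote_after:
                self.lexicon.add(word)     # a dialect or slang word is learned

learner = UsageLearner({"open", "the", "door"})
for _ in range(3):
    learner.hear("open the boot")          # British "boot" for a car's trunk
print("boot" in learner.lexicon)           # True once heard often enough
```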

Types of Artificial Intelligence

It should be noted that there are two types of AI: weak and strong. According to the Future of Life Institute, "Weak AI … is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car)." Strong AI, according to the same institute, "would outperform humans at nearly every cognitive task" (Tegmark, 2016). Weak AI is what we currently have, and it is what works behind the scenes in most new technological innovations of 2016. There is also no clear consensus on when strong AI will come into being. A poll taken at the 2015 Puerto Rico AI Conference showed the world's top AI scientists divided over when it will arrive. The median answer was 2045, but there were many holdouts, with some researchers believing it is centuries down the road. So it may be a very long time until universal speech technology is possible (Tegmark, 2016).

How is this being done? Weak AI is coupled with classical AI, which simply uses AI to perform quantitative tasks. This kind of AI performs simple tasks one at a time, and it is already a facet of our lives: it plays a role in weather prediction and analysis systems and in the unbeatable chess programs that grace the news every so often. IBM created probably the most famous of these, Deep Blue. Compared to strong AI, weak or classical AI is much easier to create; all that is needed is a lot of processing power and some programming. Strong AI is much harder to explain, because the processes behind it do not exist yet. But many governments and technology companies are racing to make a preliminary stride in the field. Remember, the human brain required 3.5 billion years of evolution to be created, so what these organizations are attempting is truly astonishing (aihorizon, 2017).

While this is phenomenal, something else couples with it. Right now, the big thing in real estate is no longer granite countertops; it's smart, connected homes. In 2016, there is already a plethora of products on the market, like Google's Nest smart thermostat, LG's connected appliance range, and Kwikset's smart locks. The current foundation of these products is that a smartphone can activate their features, but why stop there? If these two technological trends were to collide, the world would be a new place. Instead of using an app to set the temperature, why not use your voice? Or you could just say "open garage," "unlock," or Ali Baba's "open sesame" to open a door.
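
To illustrate how such voice commands could be wired to a connected home, here is a minimal sketch of a phrase-to-action dispatcher. The device functions and phrases are hypothetical placeholders, not any vendor's actual API.

```python
# A minimal sketch of routing recognized phrases to smart-home actions.
# The device functions and phrases below are hypothetical placeholders.
def open_garage():
    return "garage door opening"

def unlock_front_door():
    return "front door unlocked"

# Each recognized phrase maps to the action it should trigger.
COMMANDS = {
    "open garage": open_garage,
    "unlock": unlock_front_door,
    "open sesame": unlock_front_door,   # Ali Baba's phrase, same effect
}

def dispatch(phrase):
    action = COMMANDS.get(phrase.strip().lower())
    return action() if action else "command not recognized"

print(dispatch("Open Sesame"))  # -> "front door unlocked"
```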

The Sad Truth

If this were the whole future, then what a bleak technological pothole the future would be. All these technologies have severe pitfalls. Not only is fundamental development being passed over in favor of sales-grabbing consumer features, but progress is also painfully slow. Worse, it is all inherently insecure. A voice as a password or key is a horrible premise. More and more kids are using technology, and they have yet to go through puberty, the same thing that changes kids' voices. See a problem? If FD is trained to recognize little Billy's voice, it won't be able to recognize older Billy's voice, which might be a whole octave deeper. To make it even harder for FD to spread, people have accents and dialects. If FD's English is at first New York English, it may very well have trouble in the Deep South! The Sicilian dialect of Italian can be very different from the standard, and Arabic has a range of regional dialects, from Levantine (which includes Lebanese) to Hejazi and Gulf Arabic, among others. All these complexities will do nothing but bog down the progress of this technology. Many people sound the same, voices can be imitated, and the AI needed to include many people very quickly is still on the runway. So, what else is there?

A Truly Universal Communication Technology

The ideal solution is the only truly egalitarian, all-inclusive, equal-access technology: the brain-computer interface, or BCI for short. Humans have actually imagined it for millennia in the form of extrasensory perception, or ESP. ESP is the ability to receive information directly in your mind. While dismissed by many, it is a concept that has nevertheless been toyed with throughout the years. Just as people watched birds fly and then invented planes to do it themselves, people have dreamed of ESP for centuries, and only now is the technology ready to make it a reality for all. ESP is a direct human-to-human way of communicating thoughts, messages, and commands. A BCI, by contrast, is a closed circuit in which:

  • The first human interacts with the machine
  • The machine interacts with the second human’s machine
  • The machine relays it to the second human, thus completing the circuit

The ALS Association defines it as “a system that allows a person to control a computer or other electronic device using only his or her brainwaves, with no movement required” (ALS Association, 2015).

To get the disbelief out of the way: this is not some far-future sci-fi premise; it will be available in our lifetimes. According to renowned neuroscientist Gerwin Schalk, "There is clearly an increasing interest in using brain signals for rehabilitation and other types of diagnosis" (Geva, 2009). Yes, voice technology took about 150 years to come to fruition, but BCIs have a trump card: Moore's Law. In short, Moore's Law is the prediction that every two years, the number of transistors in an integrated circuit will double. On the surface, it seems to have more to do with processing power than with BCIs. But, while not directly related, it illustrates a similar concept: every year, the number of innovations to human life increases exponentially.

It took roughly 180,000 years just to adopt agriculture, but following the exponential growth of the last two decades, humans have gained high-speed Internet, smartphones, and many other life-changing inventions. The old adage that two heads are better than one is more evident in the human experience now than ever before, because there are more educated innovators now than at any other time, and innovations are being made to a wide range of technological products at an unprecedented rate. While none of this was envisioned in Ali Baba and the Forty Thieves, maybe the authors of the story weren't big enough dreamers. This is the true purpose of this research paper: to explore the uses and future possibilities of the one and only truly universal communication technology. It is the era of the brain-computer interface (BCI).
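
As a small worked example of the doubling just described, the snippet below projects a transistor count forward under Moore's Law; the starting figure of one billion transistors is an arbitrary illustration, not a claim about any particular chip.

```python
# A small worked example of Moore's Law: the count doubles once every two years,
# so after `years` years it grows by a factor of 2**(years / 2).
def moores_law(start_count, years):
    return start_count * 2 ** (years / 2)

print(f"{moores_law(1e9, 10):.2e}")  # ~3.20e+10: a 32x increase in a decade
```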

Current Status of BCI Technology

First, in order to understand how BCIs work, it must be known how the brain works. The brain is made up of many minuscule cells called neurons, which are connected to other neurons via axons and dendrites. Axons send information along, and dendrites receive it. The information itself is transmitted in the form of electric signals. The axons are wrapped in a layer of myelin, which insulates the electricity. But, like all things in life, nothing is perfect: the myelin can't contain all of the signal, and some of it leaks out.

Next, an electrode needs to be implanted under the skull, directly in the gray matter. While this invasive approach isn't perfect and can have long-term negative side effects, it is currently the most accurate method. What is more commonly known, and more Hollywood-friendly, are the stereotypical electrodes with wires placed onto the head with the assistance of some gel. Even setting aside the obvious flaws of that method, the skull does a good job of blocking out the signal. This evolved into an implant connected to external wires; while toyed with, that idea was tossed for its immobility and infection risk (Grabianowski, 2007). Therefore, the wireless implant is the best solution.

Emily Singer expertly summarized the evolution of this vital piece of technology in her MIT Technology Review article. “Electroencephalography (EEG) is a decades-old method for measuring the brain’s electrical activity using a series of sensors placed on the scalp. In recent years, better sensor technologies and data-processing techniques, as well as more detailed knowledge of the brain, have dramatically improved the information that can be extracted from EEG. For example, scientists now use computationally intense signal processing and pattern-recognition techniques to predict where in the brain a particular signal measured on the surface of the scalp originated or how different parts of the brain are connected” (Singer, 2008).

If the reader is somehow not already excited, just wait. These wireless implants are the first of their kind, as University of Michigan researcher Cindy Chestek elaborates: “Scientists have prototyped wireless brain-computer interfaces before, and some simpler transmitters have been sold for animal research. But there’s just no such thing as a device that has this many inputs and spits out megabits and megabits of data. It’s fundamentally a new kind of device” (Regalado, 2015).

Once in place, the electrodes measure the escaping electric signals, then amplify and filter them. Now there is a signal, and it must be analyzed by a computer program. Currently, the differences in electrical strength are so minute that only heightened brain activity about a specific thing can be detected. To illustrate this, imagine an entire deck of cards displayed on a screen, each card appearing one after the other at one-second intervals. Now, "pick a card, any card." And your card is the ace of spades (surprised, right?). While every card other than yours is shown on the screen, the detected signal is the same as at any other meaningless time. But if you are thinking about the ace of spades and it shows up on the screen, the brain activity, and consequently the electric signal, increases by a large enough margin that it can be detected. In the coming years, this field will keep growing; there are already prototypes of computers that can figure out what is being seen by analyzing brainwaves using these very same implants.
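
Here is a minimal sketch of the logic behind that card experiment, under the simplifying assumption that each card presentation yields a single response amplitude: the viewer's card is the one whose response rises clearly above the baseline set by all the others. The response values are simulated, not real recordings.

```python
# A minimal sketch of the card experiment's logic: one simulated response per
# card presentation; the target is flagged only if it clearly exceeds baseline.
import numpy as np

rng = np.random.default_rng(0)
cards = [f"card_{i}" for i in range(52)]
target = "card_13"  # the ace of spades, say

# One response per card: baseline noise, plus a boost when the target appears.
responses = {c: rng.normal(1.0, 0.1) + (0.8 if c == target else 0.0) for c in cards}

baseline = np.mean(list(responses.values()))
spread = np.std(list(responses.values()))
detected = max(responses, key=responses.get)

# Report a detection only if the strongest response clearly stands out.
if responses[detected] > baseline + 3 * spread:
    print("detected target:", detected)    # expected: card_13
else:
    print("no card stood out above baseline")
```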

But as processing power increases (Moore's Law again) and technological advancements are made, even the faintest of signals will become detectable, allowing thoughts about things that aren't right in front of you to be detected as well. Also important is the fact that the brain is compartmentalized, meaning that different parts do different things: one part handles memory creation, another handles memory archiving, and so on. To analyze every type of thought, you would therefore need an electrode in every part of the brain. However, the only thing that can solve this problem is the very same thing that creates every other problem: time.

Unlike AI, which has two main types, electrodes come in three: wet, dry, and noncontact. Wet electrodes have traditionally been more accurate, but their adhesive nature creates irritation and discomfort. Because of this, the popularity of dry electrodes has been steadily increasing, as they do not cause as much discomfort. The fundamental idea of dry electrodes is to avoid using a wet gel to get the necessary readings, but the lack of gel gives them a tendency to act up when on-scalp factors change, such as skin condition or motion. A practical electrode design has found its champion in noncontact electrodes. Noncontact electrodes will be the electrode of choice for BCI products and innovations because they are comfortable and easy to apply while remaining accurate in most situations. Better still, they also have the ability to go wireless (Chi & Cauwenberghs, 2010).

In the midst of this secret technological revolution, one may ask: who is bringing about this massive change? The electrode research is being handled by BrainGate. BrainGate isn't a Silicon Valley startup; it's a consortium based at Brown University, and they are by no means rookies at this game. BrainGate was "the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm" (Regalado, 2015). The aforementioned wireless electrode implant took a decade of engineering, and it is still not approved by the US Food and Drug Administration.

Regulatory setbacks aside, BrainGate is the major pioneer behind this technology. Its current implant has worked for a full year in pigs and macaques and still performs its task, albeit at a lower quality. The electrode is now able to transmit tens of megabits of data per second, wirelessly, to a nearby receiver, which then processes the data (Rojahn, 2013). Going forward, one of the team's main priorities is to improve the implant's longevity; data transfer speeds will also increase, and the accuracy will be honed to a pinpoint. The team is working day and night to improve this technology, and they are rapidly approaching FDA approval for use in humans.

A Prosperous Horizon

This isn’t 1900. The times have changed. Inventions and breakthroughs aren’t the product of talented or lucky individuals, but the product of skilled teams. And there are many teams out there now. This technology has massive potential, which can soon be awakened.

Scientific Consensus

Next, there is a multitude of medical benefits that stem directly from BCI advancements, and they have already been anticipated in broad scope by the scientific community. For some, BCIs can help restore function that has been lost or disabled in any number of ways. If someone has lost the ability to communicate verbally, this technology will be able to restore the oft-taken-for-granted ability to communicate. It can be paired with existing technologies to create a beautiful fusion of innovation and care.

Planned Medical Applications

Let's consider what we already have. We have mobile screens with day-long batteries. We have technologies that can speak English fluently. And soon we are going to have technology that can analyze one's basic thoughts. We have all these ingredients, so let's bake the cake! If a mentally able person were to develop a degenerative neuromuscular illness, their verbal communication could be restored. In fact, Stephen Hawking is part of a landmark study that is working to accomplish this very thing (Glendinning, 2013). This topic will be elaborated on further below.

Also, many people suffering from neurological ailments could benefit greatly from developments in brain-computer interfaces. These developments can restore lost function to those suffering, as well as improve existing function (Stanford Neurosciences Institute, 2017).

Helwan University in Cairo, after years of dedicated research, released an informative review of this technology. The researchers concluded: "Mental state monitoring function of BCI systems has also contributed in forecasting and detecting health issues such as abnormal brain structure (such as brain tumor), Seizure disorder (such as epilepsy), Sleep disorder (such as narcolepsy), and brain swelling (such as encephalitis). Tumor, which is generated from uncontrolled self-dividing of cells, could be discovered using EEG as a cheap secondary alternative for MRI and CT-SCAN. EEG-based Brain tumors detection systems have been the main subject of the researches, while (it) has been concerned with identifying breast cancer using EEG signals." The same researchers believe that EEGs can detect dyslexia, as well as many common sleep disorders (Abdulkader, Atia, & Mostafa, 2015). To sum up, EEGs will be able to detect certain tumors, dyslexia, epilepsy, sleep disorders, and swelling of the brain.

It is important to note that EEGs detect the brain's electric signals, that is, overall brain activity. During a seizure, brain activity is dramatically altered, and some epileptics exhibit this atypical activity even when they are not having a seizure. As for the sleep-related conditions, tiredness or the onset of sleep disrupts typical brain activity (Epilepsy Society, 2015). An EEG can detect any of these anomalies and make diagnosis easier for medical professionals. Brain tumors, for their part, often cause seizures or sleep problems (Cancer.Net, 2017); in this sense, and through direct tumor screening, EEGs can help detect brain tumors.
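
As a simplified sketch of this kind of monitoring, the snippet below computes how much of a simulated EEG segment's power falls in a slow-wave band and flags segments that deviate strongly from a known-normal baseline. The sampling rate, band, threshold, and signals are all illustrative assumptions, not clinical criteria.

```python
# A simplified sketch of EEG monitoring: flag segments whose slow-wave (2-4 Hz)
# power share is far outside a baseline built from "normal" segments.
import numpy as np

FS = 256  # assumed sampling rate in Hz
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)

def band_power_share(segment, low=2.0, high=4.0):
    """Fraction of total spectral power between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(segment.size, d=1 / FS)
    power = np.abs(np.fft.rfft(segment)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return power[band].sum() / power.sum()

def simulate(freq):
    """A toy 'EEG' segment: one dominant rhythm plus background noise."""
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(t.size)

# Build a baseline from many normal (10 Hz dominated) segments.
baseline = [band_power_share(simulate(10)) for _ in range(50)]
mu, sd = np.mean(baseline), np.std(baseline)

def looks_anomalous(segment):
    return abs(band_power_share(segment) - mu) > 3 * sd

print(looks_anomalous(simulate(10)), looks_anomalous(simulate(3)))  # typically: False True
```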

One of the more publicized applications of this technology is its potential to alleviate locked-in syndrome. Locked-in syndrome is a neurological disorder in which, by some means or another, one's entire body is paralyzed with the exception of the eyes. These people retain full mental capability but are unable to express themselves in any way other than blinking (Jean-Dominique Bauby once wrote an entire book, The Diving Bell and the Butterfly, just by blinking). On a positive note, brain-computer interfaces have the ability to restore communication to those suffering from locked-in syndrome: the EEG will detect a person's intentions and transfer the data to a legible platform, which will display the person's thoughts or ideas.
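
One hedged sketch of how such a "legible platform" might work is a simple speller: letters are highlighted one at a time, and the letter whose highlight coincides with the strongest brain response is appended to the message, much like the card example earlier. The response values below are simulated; a real speller would use measured EEG features and many repetitions per letter.

```python
# A minimal sketch of a speller for locked-in users: the letter whose highlight
# coincides with the strongest simulated response is added to the message.
import numpy as np

rng = np.random.default_rng(1)
ALPHABET = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ ")

def pick_letter(intended):
    """Simulate one pass of highlighting every letter and picking the peak response."""
    responses = {ch: rng.normal(1.0, 0.1) + (0.8 if ch == intended else 0.0)
                 for ch in ALPHABET}
    return max(responses, key=responses.get)

message = "".join(pick_letter(ch) for ch in "HI MOM")
print(message)  # very likely "HI MOM"; noise can occasionally flip a letter
```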

Recommendations

After researching the current state and possible future improvements of brain-computer interface (BCI) technology, this study makes a few recommendations that coincide with ongoing technological gains.

First, the current rudimentary BCI technology can only be overhauled if government and industry increase basic and applied research funding for it. This technology has the potential to revolutionize the human communication experience and to generate billions of dollars in profit, so the financial investment is likely to pay off. The right amount of funding could create a positive feedback loop of development and profits. At the end of that loop, provided it happens, a few important innovations could become available. One is that electrodes will be able to detect smaller signals in the brain and will have a wider reach, meaning that more commands can be identified and carried out, and that fewer electrodes can analyze the entire brain. They will also have powerful built-in processors, eliminating the need for a separate control center at the receiver. This change is akin to recent advancements in processing power: what used to take up an entire room can now fit in the palm of your hand. While not possible yet, if the exponential growth in technological advancement continues, there would eventually be no need for any dangerous implants. When the electrodes reach the point where their range rivals that of a low-end internet router, the technology will hit a new high. That matters for one main reason: people would undoubtedly be repulsed at the idea of having neurological electrodes implanted.

Second, as another necessary step toward this universal technology, security researchers need to keep improving their methods and always stay at least one step ahead of those with malicious intent. The way it usually works is that a new security measure is introduced; it gets infiltrated or a weakness is discovered in it, but by then a new measure is already in place. This game of cat and mouse has already become the norm in the early 21st century, and there is nothing to suggest that the dynamic will change. The fragile balance can be maintained as long as security efforts continue to receive generous funding. After these improvements, BCI technology will cease to scare away regular people, which will encourage the worldwide spread of this universal technology.

Third, cooperation and research between disciplines needs to be not only encouraged but required. The creation of a BCI requires interdisciplinary research, as it necessitates expertise in both neuroscience and computer science. But that is just its creation. Brain-computer interfaces have a mammoth number of applications, many of which are not directly tied to neuroscience or computer science. In order for this technology to reach its maximum potential, insights and ideas must be shared across a multitude of disciplines and organizations. Only once BCIs become a melting pot of ideas will they become a truly universal technology.

Fourth, this technology should be tailored to the consumer and tied in with the advance of smart home technology. If this is followed through, it could create a beautiful new world, one previously only dreamed of. Picture this: you wake up to the familiar shriek of your alarm. As you stand up to get out of bed, you think about turning the clock off, and it silences itself. On your way to practice healthy oral habits, you remember coffee, and the machine turns on. The faucets and showers turn on when you want them and automatically adjust their temperature depending on how you're feeling. Then you make your way downstairs. You feel a breeze and react to it slightly, and the temperature in the room rises on its own. The cool breeze reminds you of your brother, who is on vacation in Switzerland for his birthday. You mentally queue up a messaging service, which you use to wish him a happy birthday.

Fifth, these devices must have a negligible radiation footprint. There is a high chance that the wireless technology will advance to the point where implants aren't necessary. Once again, liken this technology to your smartphone. A notable portion of the population firmly believes that the radiation smartphones emit is carcinogenic. While this segment is usually dismissed as tinfoil hatters, they have not been disproven; the technology is too new for its long-term effects to be documented. After a few decades, everyone will know whether or not the radiation is harmful. The very same risk is present in wireless brain-computer interface sensors, since the communication they require is the same kind found in smartphones. There is a possibility of a health risk, but no one knows for sure, and this possible risk can be mitigated if emissions are kept to a bare minimum.

Lastly, there has to be an effective education and public awareness campaign if this technology is to become universal. Just as people were scared to use the telephone when it first came out, the masses need to be informed of the true facts of this technology so that they are not reluctant to try it. And it's not just the American or European masses that need to be brought up to speed; it's the global population. To become universal, the technology has to be in use all across the world, which such an awareness campaign can help bring about. It will have a much easier time spreading if people know that it is not particularly harmful, is very secure, is becoming economical, and is ripe with potential.

Technological Challenges

There’s always a catch. Even if all of the recommendations are followed, a lot can change in a very short amount of time. The recommendations need to be updated periodically to stay relevant. One of the only possible flaws present in this utopia would be security. And frankly, as security progresses, so does infiltration. It would be about as safe as your smartphone is. There is of course a risk of it being hacked, but that doesn’t stop most smartphone owners from using their phones, now does it?

Also, everyone's body is different. During the period of development in which implants are still needed for brain-computer interfaces, people's bodies may well react differently. One person's body may reject the implant; another's may not heal properly after the implantation; or the implant could outright cause an infection. Imagine that some harmful bacteria slipped through during an otherwise routine procedure: the doctors would have inadvertently delivered bacteria directly to the brain. And let's not forget that surgery is, by itself, inherently risky. Every one of these scenarios is intrinsically dangerous.

Conclusions

Ultimately, BCIs have the potential to vastly change billions of lives. Hearken back to the beginning of this paper, which alluded to Ali Baba and the Forty Thieves. In the story, the downfall of the robbers was the fact that Ali Baba had discovered the secret phrase needed to gain access to the cave. Well, what if the leader of the bandits only had to think about moving the door for it to open, rather than speak the phrase aloud? This example illustrates that in the technological struggle between voice and brain-computer interfaces, the latter is supreme. The commands to be executed are simple boolean commands, and the brain-computer interface will be refined enough to differentiate an "on" from an "off" for specific options. A world with this amount of ease, one not even dreamed about in The Jetsons, can, and will, be our reality.

To conclude, the technology and its associated functions are all progressing well and naturally. Smart homes are starting to spread, and the medical field is already preparing for the advent of this technology. All in all, this is just one possible way the future may pan out. It's a beautiful world we live in, and it's changing faster than ever. Once this point is reached, we will be in possession of a truly universal technology. The road ahead is of Ali Baban proportions, and it is exceptionally bright.

Works Cited

  1. Abdulkader, Sarah N., Ayman Atia, and Mostafa-Sami M. Mostafa. “Brain Computer Interfacing: Applications and Challenges.” Brain Computer Interfacing: Applications and Challenges. Helwan University, 20 Dec. 2015. Retrieved December 3 2016, from http://www.sciencedirect.com/science/article/pii/S1110866515000237
  2. “Appendix. The Story of Ali Baba and the Forty Thieves.” 1909-14. Stories from the Thousand and One Nights. The Harvard Classics. Available at: http://www.bartleby.com/16/905.html (Accessed: 30 November 2016)
  3. Aulakh, Raveena. “Dying Languages: Scientists Fret as One Disappears Every 14 Days | Toronto Star.” Thestar.com. N.p., 15 Apr. 2013. Available at: https://www.thestar.com/news/world/2013/04/15/dying_languages_scientists_fret_as_one_disappears_every_14_days.html (Accessed: 30 November 2016)
  4. “Brain-Machine Interface.” Brain-Machine Interface | Neurosciences Institute. Stanford Neurosciences Institute, Available at: https://neuroscience.stanford.edu/initiatives/big-ideas-neuroscience/brain-machine-interface (Accessed: 3 December 2016)
  5. “Dynamic Microphones.” Dynamic Microphones. MediaCollege.com, Available at: http://www.mediacollege.com/audio/microphones/dynamic.html (Accessed: 30 November 2016)
  6. Ghose, Tia. “Mind-Reading Computer Instantly Decodes People’s Thoughts.” Www.livescience.com. N.p., 29 Jan. 2016. Available at: http://www.livescience.com/53535-computer-reads-thoughts-instantaneously.html (Accessed: 2 November 2016)
  7. Glendinning, Diana. “References:.” Rutgers University Libraries – Health Sciences – Brain-Computer Interfaces. N.p., 2013. Available at: http://libraries.rbhs.rutgers.edu/rwjlbweb/posters/brain.html (Accessed: 30 November 2016)
  8. Grabianowski, Ed. “How Brain-computer Interfaces Work.” HowStuffWorks. N.p., 02 Nov. 2007 Available at: http://computer.howstuffworks.com/brain-computer-interface.htm
  9. History.com Staff. “MORSE CODE & THE TELEGRAPH.” History.com. A&E Television Networks, 2009 Available at: http://www.history.com/topics/inventions/telegraph (Accessed: 2 November 2016)
  10. Isaacson, Walter. The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution. New York: Simon & Schuster, 2014. Print.
  11. Knight, Will. “Baidu’s Deep-Learning System Rivals People at Speech Recognition.” MIT Technology Review. N.p., 16 Dec. 2015. Available at: https://www.technologyreview.com/s/544651/baidus-deep-learning-system-rivals-people-at-speech-recognition/ (Accessed: 30 November 2016)
  12. “DOSITS: What Is Sound?” DOSITS: What Is Sound? N.p., n.d. Available at: http://www.dosits.org/science/sound/whatissound/ (Accessed: 30 November 2016)
  13. Peters, Betts, and Melanie Fried-Oken,. “Brain-Computer Interface (BCI).” ALSA.org. The ALS Association, Sept. 2014. Retrieved on 30 November 2016, from: http://www.alsa.org/als-care/resources/publications-videos/factsheets/brain-computer-interface.html?referrer=https://www.google.com/
  14. Regalado, Antonio. “A Brain-Computer Interface That Works Wirelessly.” MIT Technology Review. N.p., 14 Jan. 2015. Retrieved on 30 November 2016, from: https://www.technologyreview.com/s/534206/a-brain-computer-interface-that-works-wirelessly/
  15. Shechmeister, Mathew. “Birth of the Microphone: How Sound Became Signal.” Wired. Conde Nast, 11 Jan. 2011. Available at: https://www.wired.com/2011/01/birth-of-the-microphone/ (Accessed: 30 November 2016)
  16. Woodford, Chris. “How Does Voice Recognition Software Work?” Explain That Stuff. N.p., 07 June 2015. Retrieved on 30 November 2016, from: http://www.explainthatstuff.com/voicerecognition.html
  17. Burrus, Daniel. “Artificial Intelligence: The Promise of Limitless Possibilities.” Daniel Burrus. N.p., 13 Apr. 2016. Available at: http://www.burrus.com/2016/04/artificial-intelligence-promise-limitless-possibilities/ (Accessed: 23 Dec. 2016)
  18. Tegmark, Max. “Benefits & Risks of Artificial Intelligence.” Future of Life Institute. Future of Life Institute, n.d. Available at: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/ (Accessed: 23 Dec. 2016)
  19. “AI Horizon: Introduction to Artificial Intelligence.” AI Horizon: Introduction to Artificial Intelligence. AI Horizon, n.d. Available at: http://www.aihorizon.com/essays/generalai/intro.htm (Accessed: 29 Jan. 2017)
  20. Singer, Emily. “Better Brain-Wave Analysis.” MIT Technology Review. MIT Technology Review, 22 Oct. 2012. Web. 05 Dec. 2016. <https://www.technologyreview.com/s/413596/better-brain-wave-analysis/>.
  21. Geva, Amir. “Better Brain-Wave Analysis.” AABGU. AABGU, 26 May 2009. Web. 07 Mar. 2017. <https://aabgu.org/better-brain-wave-analysis/>.
  22. Chi, Yu M., and Gert Cauwenberghs. “Wireless Non-contact EEG/ECG Electrodes for Body Sensor Networks.” Integrated Systems Neuroengineering (2010): n. pag. Integrated Systems Neuroengineering University of California San Diego. Web. <http://www.isn.ucsd.edu/pubs/bsn10.pdf>.
  23. Rojahn, Susan Young. “A Wireless Brain-Computer Interface.” MIT Technology Review. MIT Technology Review, 16 Mar. 2016. Web. 20 Mar. 2017. <https://www.technologyreview.com/s/512161/a-wireless-brain-computer-interface/>.
  24. “EEG (electroencephalogram).” Epilepsy Society. Epilepsy Society, n.d. Web. 23 Mar. 2017. <https://www.epilepsysociety.org.uk/eeg-electroencephalogram#.WNRcKvkrJPY>.
  25. “Brain Tumor – Symptoms and Signs.” Cancer.Net. N. p., 16 Sept. 2016. Web. 23 Mar. 2017. <http://www.cancer.net/cancer-types/brain-tumor/symptoms-and-signs>.
  26. Harris, Tom. “How Robots Work.” HowStuffWorks Science. HowStuffWorks, 16 Apr. 2002. Web. 24 Mar. 2017. <http://science.howstuffworks.com/robot6.htm>.
