Welcome
To
Mechatronics World

Tuesday, 29 November 2016

Brain-computer interface


As the power of modern computers grows alongside our understanding of the human brain, we move ever closer to making some pretty spectacular science fiction into reality. Imagine transmitting signals directly to someone's brain that would allow them to see, hear or feel specific sensory inputs. Consider the potential to manipulate computers or machinery with nothing more than a thought. It isn't just about convenience -- for severely disabled people, development of a brain-computer interface (BCI) could be the most important technological breakthrough in decades. In this article, we'll learn all about how BCIs work, their limitations and where they could be headed in the future.

The Electric Brain

The reason a BCI works at all is because of the way our brains function. Our brains are filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every time we think, move, feel or remember something, our neurons are at work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph [source: Walker]. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron.


Although the paths the signals take are insulated by something called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean and use them to direct a device of some kind. It can also work the other way around. For example, researchers could figure out what signals are sent to the brain by the optic nerve when someone sees the color red. They could rig a camera that would send those exact signals into someone's brain whenever the camera saw red, allowing a blind person to "see" without eyes.


Monday, 28 November 2016

Auditory nerves control

Sensory Input
   
Dr. Peter Brunner demonstrates the brain-computer interface at a conference in Paris.

The most common and oldest way to use a BCI is a cochlear implant. For the average person, sound waves enter the ear and pass through several tiny organs that eventually pass the vibrations on to the auditory nerves in the form of electric signals. If the mechanism of the ear is severely damaged, that person will be unable to hear anything. However, the auditory nerves may be functioning perfectly well. They just aren't receiving any signals.
A cochlear implant bypasses the nonfunctioning part of the ear, processes the sound waves into electric signals and passes them via electrodes right to the auditory nerves. The result: A previously deaf person can now hear. He might not hear perfectly, but it allows him to understand conversations.
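The mapping that paragraph describes, from sound waves to per-electrode signals, is easiest to see as a toy example. The sketch below is a loose, hypothetical take on a continuous-interleaved-sampling style strategy; the band edges, channel count and compression curve are assumptions chosen for illustration, not the parameters of any real implant.

```python
# Minimal sketch of a CIS-style cochlear implant processing chain (illustrative only).
# The band edges, channel count and compression curve are assumptions, not the
# parameters of any real implant.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16_000                                                   # audio sample rate (Hz), assumed
BANDS = [(200, 400), (400, 800), (800, 1600), (1600, 3200)]   # one frequency band per electrode

def electrode_levels(audio: np.ndarray) -> np.ndarray:
    """Map a frame of audio to one stimulation level per electrode channel."""
    levels = []
    for low, high in BANDS:
        sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
        band = sosfiltfilt(sos, audio)              # isolate this frequency band
        envelope = np.abs(hilbert(band))            # slowly varying loudness within the band
        levels.append(np.log1p(envelope.mean()))    # compress, roughly like perceived loudness
    levels = np.array(levels)
    return np.clip(levels / np.log1p(1.0), 0.0, 1.0)  # normalize into a 0..1 stimulation level

if __name__ == "__main__":
    t = np.arange(FS) / FS
    tone = 0.3 * np.sin(2 * np.pi * 1000 * t)       # one second of a 1 kHz test tone
    print(electrode_levels(tone))                   # the 800-1600 Hz channel responds most strongly
```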
The processing of visual information by the brain is much more complex than that of audio information, so artificial eye development isn't as advanced. Still, the principle is the same. Electrodes are implanted in or near the visual cortex, the area of the brain that processes visual information from the retinas. A pair of glasses holding small cameras is connected to a computer and, in turn, to the implants. After a training period similar to the one used for remote thought-controlled movement, the subject can see. Again, the vision isn't perfect, but refinements in technology have improved it tremendously since it was first attempted in the 1970s. Jens Naumann was the recipient of a second-generation implant. He was completely blind, but now he can navigate New York City's subways by himself and even drive a car around a parking lot [source: CBC News]. In terms of science fiction becoming reality, this process gets very close. The terminals that connect the camera glasses to the electrodes in Naumann's brain are similar to those used to connect the VISOR (Visual Instrument and Sensory Organ) worn by blind engineering officer Geordi La Forge in the "Star Trek: The Next Generation" TV show and films, and they're both essentially the same technology. However, Naumann isn't able to "see" invisible portions of the electromagnetic spectrum.
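The visual version of the same principle can be sketched just as briefly: downsample what the camera sees to a small electrode grid and let brighter regions drive stronger stimulation (and hence brighter phosphenes). Every specific number here, the grid size, the current range and the linear brightness-to-current map, is an assumption for illustration, not a description of Naumann's implant.

```python
# Hypothetical sketch of how a camera frame could drive a small visual-cortex
# electrode array. The 10x10 grid and the current range are illustrative assumptions.
import numpy as np

GRID = (10, 10)            # assumed electrode grid (rows, cols)
MAX_CURRENT_UA = 80.0      # assumed maximum stimulation amplitude, in microamps

def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Downsample a grayscale camera frame to one current value per electrode."""
    rows = np.array_split(frame, GRID[0], axis=0)
    cells = [np.array_split(r, GRID[1], axis=1) for r in rows]
    brightness = np.array([[c.mean() for c in row] for row in cells])  # 0..255 per cell
    # Brighter regions of the scene -> stronger stimulation -> brighter phosphenes
    return brightness / 255.0 * MAX_CURRENT_UA

if __name__ == "__main__":
    fake_frame = np.zeros((240, 320))
    fake_frame[:, 160:] = 255                  # right half of the scene is bright
    currents = frame_to_stimulation(fake_frame)
    print(np.round(currents[0], 1))            # right-hand electrodes get full current
```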
Read on to find out about the inherent limitations of brain-computer interfaces -- and also learn about some exciting innovations.

Sunday, 27 November 2016

How Digital Immortality Works




One vision of the future includes conquering death with technology. But how can we do it?

Humans have been chasing immortality for millennia. In some cultures, you attain a kind of immortality by doing great deeds, which people will talk about long after you pass away. Several religions feature some concept of immortality -- the body may die but some part of you will exist forever. But what if science made it possible to be truly immortal? What if there were a way for you to live forever?
That's the basic concept behind digital immortality. Some futurists, perhaps most notably inventor Ray Kurzweil, believe that we will uncover a way to extend the human lifespan indefinitely. They've identified several potential paths that could lead to this destination. Perhaps we'll identify the genes that govern aging and tweak them so that our bodies stop aging once they reach maturity. Maybe we'll create new techniques for creating artificial organs that combine organic matter with technology and then replace our original parts with the new and improved versions. Or maybe we'll just dump our memories, thoughts, feelings and everything else that makes us who we are into a computer and live in cyberspace.

Saturday, 26 November 2016

Digitizing human consciousness

In theory, we’re only a few decades away from digital versions of ourselves.

Dmitry Itskov predicts that in the year 2045, humans will be able to back themselves up to the cloud. Yep, he believes you'll be able to create a digital version of your human consciousness, stored in a synthetic brain and an artificial host.

Itskov, a Russian entrepreneur, media mogul and billionaire, plans to live forever — and he plans to take all of us along for the ages, as holograms. His project, called the 2045 Initiative, is named for the year he predicts he'll complete the final milestone in digitizing human consciousness. While substrate-independent minds (mind-uploading) may be a new reality to the next generation, the human desire for immortality certainly isn't.

Friday, 25 November 2016

Thought Control?

BCI Drawbacks and Innovators
   
Two people in Germany use a brain-computer interface to write "how are you?"

Although we already understand the basic principles behind BCIs, they don't work perfectly. There are several reasons for this.
  1. The brain is incredibly complex. To say that all thoughts or actions are the result of simple electric signals in the brain is a gross understatement. There are about 100 billion neurons in a human brain [source: Greenfield]. Each neuron is constantly sending and receiving signals through a complex web of connections. There are chemical processes involved as well, which EEGs can't pick up on.
  2. The signal is weak and prone to interference. EEGs measure tiny voltage potentials. Something as simple as the blinking eyelids of the subject can generate much stronger signals. Refinements in EEGs and implants will probably overcome this problem to some extent in the future, but for now, reading brain signals is like listening to a bad phone connection. There's lots of static. (A sketch of the kind of filtering this requires appears after this list.)
  3. The equipment is less than portable. It's far better than it used to be -- early systems were hardwired to massive mainframe computers. But some BCIs still require a wired connection to the equipment, and those that are wireless require the subject to carry a computer that can weigh around 10 pounds. Like all technology, this will surely become lighter and more wireless in the future.
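Here is the kind of cleanup that drawback 2 alludes to, sketched in a few lines: band-pass the raw EEG to the frequencies of interest and discard any window swamped by a blink-sized artifact. The sample rate, the 1-30 Hz passband and the 100-microvolt rejection threshold are all assumptions chosen for illustration.

```python
# Illustrative EEG cleanup: band-pass filter to the rhythms of interest and drop
# windows dominated by blink artifacts. Cutoffs and threshold are assumed values.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250                     # EEG sample rate in Hz, assumed
BLINK_THRESHOLD_UV = 100.0   # windows exceeding this peak amplitude are rejected

def clean_windows(eeg_uv: np.ndarray, window_s: float = 1.0):
    """Band-pass filter (1-30 Hz) and keep only artifact-free windows."""
    sos = butter(4, [1.0, 30.0], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, eeg_uv)
    step = int(window_s * FS)
    windows = [filtered[i:i + step] for i in range(0, len(filtered) - step + 1, step)]
    return [w for w in windows if np.max(np.abs(w)) < BLINK_THRESHOLD_UV]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.normal(0, 10, FS * 10)          # ten seconds of ~10 uV background EEG
    signal[FS * 4: FS * 4 + 50] += 300           # a blink: a brief, large deflection
    print(len(clean_windows(signal)))            # the blink-contaminated window is dropped
```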
Thought Control?

    If we can send sensory signals to someone's brain, does that mean thought control is something we need to worry about? Probably not. Sending a relatively simple sensory signal is difficult enough. The signals necessary to cause someone to take a certain action involuntarily are far beyond current technology. Besides, would-be thought-controllers would need to kidnap you and implant electrodes in an extensive surgical procedure, something you'd likely notice.

BCI Innovators

  • Neural Signals is developing technology to restore speech to disabled people. An implant in an area of the brain associated with speech (Broca's area) would transmit signals to a computer and then to a speaker. With training, the subject could learn to think each of the 39 phonemes in the English language and reconstruct speech through the computer and speaker [source: Neural Signals]. (A toy sketch of this decoding step appears after this list.)
  • NASA has researched a similar system, although it reads electric signals from the nerves in the mouth and throat area, rather than directly from the brain. They succeeded in performing a Web search by mentally "typing" the term "NASA" into Google [source: New Scientist].
  • Cyberkinetics Neurotechnology Systems is marketing the BrainGate, a neural interface system that allows disabled people to control a wheelchair, robotic prosthesis or computer cursor [source: Cyberkinetics].
  • Japanese researchers have developed a preliminary BCI that allows the user to control their avatar in the online world Second Life [source: Ars Technica].
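For the Neural Signals item above, the decoding step can be pictured as an ordinary classification problem: turn each window of neural signal into a feature vector and predict one of the 39 phonemes. The sketch below uses random placeholder data and a plain logistic-regression classifier purely to show the shape of the problem; it is not the company's actual method.

```python
# Toy sketch of phoneme decoding: map feature vectors extracted from windows of
# neural signal to one of 39 phoneme classes. Data here is random placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_PHONEMES = 39       # phoneme classes mentioned in the article
N_FEATURES = 64       # assumed features per signal window (e.g. band powers)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(3900, N_FEATURES))          # placeholder training windows
y_train = rng.integers(0, N_PHONEMES, size=3900)       # placeholder phoneme labels

decoder = LogisticRegression(max_iter=1000)            # stand-in for a real decoder
decoder.fit(X_train, y_train)

new_window = rng.normal(size=(1, N_FEATURES))          # one new window of neural features
predicted_phoneme = decoder.predict(new_window)[0]
print(f"decoded phoneme class: {predicted_phoneme}")   # index 0..38, fed to a speech synthesizer
```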

Thursday, 24 November 2016

controllable robotic ray

Genetically Engineered Rat Cells Make This Robot Stingray Swim

This fully controllable robotic ray is powered by a gold skeleton and light-activated cells. Robots have advanced an enormous amount over the past few years, but they’re nowhere close to the efficiency and capability of animals. One way to avoid playing catch-up is to simply steal everything you can from animals as directly as possible -- which is exactly what a team of researchers, led by Sung-Jin Park and Professor Kevin Kit Parker at the Wyss Institute for Biologically Inspired Engineering at Harvard, did.

Wednesday, 23 November 2016

Neurorobotics Research Laboratory

Robots

Over the years we have developed many robots, each dedicated to quite different scientific questions. Despite the diversity (or perhaps just because of the diversity), some key concepts have emerged, which we regard as essential, and which we combined without compromise in our latest humanoid robot Myon.

Myon


The modular robot Myon was conceived, developed and built entirely within the context of the European project ALEAR. Myon consists of five identical humanoid robots. Through intensive collaboration with Frackenpohl Poulheim and Bayer MaterialScience, Myon and its four counterparts were fitted with an exterior skin that fulfils both practical functions (fall protection, grip surfaces) and emotional ones (acceptance, reducing fear).

Semni


The Egyptian word Semni literally means "to establish oneself" -- and this is precisely the goal of this self-exploring, multi-neural individual. Semni carefully palpates itself and its environment to discover which motor actions are effective and which ones cost an enormous amount of energy. Over time, Semni attempts increasingly acrobatic movements, learning to enjoy control of its own body.

A-Series


The A-series, consisting of five robots, is based on the Bioloid modular kit from the company Robotis, extended with specially developed AccelBoards that process sensorimotor data in a distributed manner. The AccelBoards are attached at various extremities of the robot, so that the different body regions can be monitored by sensors.

Oktavio


The walking machine Oktavio consists of a body and eight legs of identical design, which are attached to the trunk by snap closures. Jointly developed and built by Dr. Manfred Hild and Torsten Siedel, the robot is 150 cm long, 100 cm wide and 30 cm high. Oktavio was the first distributed system to work without a central processing unit, and essential concepts from Oktavio are used in advanced form in Myon.

Do:Little


Because of its ability for acoustic communication, the Do:Little received its name in a nod to Eliza Doolittle from Shaw's Pygmalion. The Do:Little not only masters directional hearing -- it also possesses unusual gustatory and proprioceptive sensors. Using bronze-coloured spring contacts at the front and silver contact strips around the body, several Do:Littles can share energy with one another and recognize when they are doing so.

TED


The acronym TED stands for Two-degrees-of-freedom Experimental Device and refers to a robot kept minimal in every respect that is nevertheless able to walk on legs. TED was created primarily as a demonstration and experimentation platform. It takes only a few parts and an afternoon to assemble a TED yourself, bend your own leg shapes and install a neural network of your own design.

Universal Gripper


Modeled on his own hand, the universal gripper was developed by Torsten Siedel and enhanced by Dr. Manfred Hild with electronics and neural control. It is controlled intuitively with a specially made data glove that provides haptic feedback: you can feel with your own hand when the universal gripper grasps and holds something.

Monday, 21 November 2016

Tele-rehabilitation

Telerehabilitation (or e-rehabilitation) is the delivery of rehabilitation services over telecommunication networks and the internet. Most types of services fall into two categories: clinical assessment (the patient's functional abilities in his or her environment), and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are: neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because the patient has a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in a clinical consultation at a distance.
Most telerehabilitation is highly visual. As of 2006, the most commonly used modalities are webcams, videoconferencing, phone lines, videophones and webpages containing rich Internet applications. The visual nature of telerehabilitation technology limits the types of rehabilitation services that can be provided. It is most widely used for neuropsychological rehabilitation; fitting of rehabilitation equipment such as wheelchairs, braces or artificial limbs; and in speech-language pathology. Rich internet applications for neuropsychological rehabilitation (aka cognitive rehabilitation) of cognitive impairment (from many etiologies) were first introduced in 2001. This endeavor has recently (2006) expanded as a teletherapy application for cognitive skills enhancement programs for school children. Tele-audiology (hearing assessments) is a growing application. As of 2006, telerehabilitation in the practice of occupational therapy and physical therapy is very limited, perhaps because these two disciplines are more "hands on".
Two important areas of telerehabilitation research are (1) demonstrating equivalence of assessment and therapy to in-person assessment and therapy, and (2) building new data collection systems to digitize information that a therapist can use in practice. Ground-breaking research in telehaptics (the sense of touch) and virtual reality may broaden the scope of telerehabilitation practice, in the future.
In the United States, the National Institute on Disability and Rehabilitation Research (NIDRR)[3] supports research and the development of telerehabilitation. NIDRR's grantees include the "Rehabilitation Engineering and Research Center" (RERC) at the University of Pittsburgh, the Rehabilitation Institute of Chicago, the State University of New York at Buffalo, and the National Rehabilitation Hospital in Washington DC. Other federal funders of research are the Veterans Administration, the Health Services Research Administration in the US Department of Health and Human Services, and the Department of Defense. Outside the United States, excellent research is conducted in Australia and Europe.

As of 2006, only a few health insurers in the United States will reimburse for telerehabilitation services. If the research shows that tele-assessments and tele-therapy are equivalent to clinical encounters, it is more likely that insurers and Medicare will cover telerehabilitation services.
● Upper-limb robot-assisted rehabilitation
● Locomotion robot-assisted rehabilitation
● Respiratory tele-rehabilitation and tele-monitoring

Sunday, 20 November 2016

Rehabilitation robotics


Rehabilitation robotics is a field of research dedicated to understanding and augmenting rehabilitation through the application of robotic devices. Rehabilitation robotics includes development of robotic devices tailored for assisting different sensorimotor functions[1] (e.g. arm, hand,[2][3] leg, ankle[4]), development of different schemes of assisting therapeutic training,[5] and assessment of the sensorimotor performance (ability to move)[6] of the patient; here, robots are used mainly as therapy aids instead of assistive devices.[7] Rehabilitation using robotics is generally well tolerated by patients, and has been found to be an effective adjunct to therapy in individuals suffering from motor impairments, especially due to stroke.
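One widely used scheme for "assisting therapeutic training" of the kind mentioned above is assist-as-needed control, where the robot supplies only the force the patient's own effort leaves missing. The sketch below is a generic illustration of that idea, not the controller of any particular device; the gain and deadband values are assumptions.

```python
# Generic assist-as-needed sketch: the robot adds torque proportional to the
# patient's remaining tracking error, and backs off entirely inside a small
# deadband so the patient does the work when they can. Gains are illustrative.
K_ASSIST = 5.0        # Nm per radian of tracking error (assumed gain)
DEADBAND = 0.05       # radians of error tolerated with no assistance (assumed)

def assist_torque(target_angle: float, actual_angle: float) -> float:
    """Return the assisting joint torque in Nm for one control step."""
    error = target_angle - actual_angle
    if abs(error) < DEADBAND:
        return 0.0                      # patient is close enough: no assistance
    return K_ASSIST * error             # otherwise help in proportion to the shortfall

if __name__ == "__main__":
    for actual in (0.50, 0.42, 0.30):   # patient reaches various fractions of a 0.5 rad target
        print(actual, "->", round(assist_torque(0.5, actual), 2), "Nm")
```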

Saturday, 19 November 2016

Artificial sense of touch



A new study led by University of Chicago neuroscientists brings them one step closer to building prosthetic limbs for humans that recreate a sense of touch through a direct interface with the brain.
The research, published Oct. 26 in the Proceedings of the National Academy of Sciences, shows that artificial touch is highly dependent on several features of electrical stimuli, such as the strength and frequency of signals. It describes the specific characteristics of these signals, including how much each feature needs to be adjusted to produce a different sensation.
“This is where the rubber meets the road in building touch-sensitive neuroprosthetics,” said Sliman Bensmaia, associate professor of organismal biology and anatomy and senior author of the study. “Now we understand the nuts and bolts of stimulation, and what tools are at our disposal to create artificial sensations by stimulating the brain.”
Bensmaia’s research is part of Revolutionizing Prosthetics, a multi-year Defense Advanced Research Projects Agency project that seeks to create a modular, artificial upper limb that will restore natural motor control and sensation in amputees. The project has brought together an interdisciplinary team of experts from government agencies, private companies and academic institutions, including the Johns Hopkins University Applied Physics Laboratory and the University of Pittsburgh.
Bensmaia and his UChicago colleagues are working specifically on the sensory aspects of these limbs. For this study, monkeys, whose sensory systems closely resemble those of humans, had electrodes implanted into the area of the brain that processes touch information from the hand. The animals were trained to perform two perceptual tasks: one in which they detected the presence of an electrical stimulus, and a second in which they indicated which of two successive stimuli was more intense.
During these experiments, Bensmaia and his team manipulated various features of the electrical pulse train, such as its amplitude, frequency and duration, and noted how the interaction of each of these factors affected the animals’ ability to detect the signal.
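To make those pulse-train features concrete, here is an illustrative generator for a charge-balanced biphasic train parameterized by exactly the three things the team varied: amplitude, frequency and duration. The phase width and sample rate are assumed values for the sketch, not the study's actual stimulation parameters.

```python
# Illustrative generator for a charge-balanced biphasic pulse train, parameterized
# by amplitude, frequency and train duration. Phase width and sample rate are assumed.
import numpy as np

def pulse_train(amplitude_ua: float, frequency_hz: float, duration_s: float,
                phase_us: float = 200.0, fs: int = 100_000) -> np.ndarray:
    """Return a current waveform (in microamps) sampled at fs Hz."""
    waveform = np.zeros(int(duration_s * fs))
    phase_samples = int(phase_us * 1e-6 * fs)
    period_samples = int(fs / frequency_hz)
    for start in range(0, len(waveform) - 2 * phase_samples, period_samples):
        waveform[start:start + phase_samples] = -amplitude_ua                      # cathodic phase
        waveform[start + phase_samples:start + 2 * phase_samples] = amplitude_ua   # anodic phase
    return waveform

if __name__ == "__main__":
    train = pulse_train(amplitude_ua=50, frequency_hz=100, duration_s=0.5)
    print(train.shape, float(train.min()), float(train.max()))   # 50000 samples, -50 to +50 uA
```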
Of specific interest were the “just-noticeable differences,” or the incremental changes needed to produce a sensation that felt different. For instance, at a certain frequency, the signal may be detectable first at a strength of 20 microamps of electricity. If the signal has to be increased to 50 microamps to notice a difference, the JND in that case is 30 microamps.
The sense of touch is really made up of a complex and nuanced set of sensations, from contact and pressure to texture, vibration and movement. By documenting the range, composition and specific increments of signals that create sensations that feel different from each other, Bensmaia and his colleagues have provided the “notes” scientists can play to produce the “music” of the sense of touch in the brain.
“When you grasp an object, for example, you can hold it with different grades of pressure. To recreate a realistic sense of touch, you need to know how many grades of pressure you can convey through electrical stimulation,” Bensmaia said. “Ideally, you can have the same dynamic range for artificial touch as you do for natural touch.”


The study has important scientific implications beyond neuroprosthetics as well. In natural perception, a principle known as Weber’s Law states that the just-noticeable difference between two stimuli is proportional to the size of the stimulus. For example, with a 100-watt light bulb, you might be able to detect a difference in brightness by increasing its power to 110 watts. The JND in that case is 10 watts. According to Weber’s Law, if you double the power of the light bulb to 200 watts, the JND would also be doubled to 20 watts.
However, Bensmaia’s research shows that, with electrical stimulation of the brain, Weber’s Law does not apply—the JND remains nearly constant, no matter the size of the stimulus. This means that the brain responds to electrical stimulation in a much more repeatable, consistent way than through natural stimulation.
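Written out, the contrast in the last two paragraphs is simple: Weber's Law says the just-noticeable difference grows in proportion to the stimulus, while the microstimulation data showed a JND that stays roughly constant. The symbols below are generic, and the light-bulb numbers just repeat the example above.

```latex
% Weber's Law for natural stimuli: the JND scales with the stimulus intensity I
\Delta I_{\mathrm{JND}} = k\,I
% Light-bulb example: k = 10\,\mathrm{W} / 100\,\mathrm{W} = 0.1,
% so at 200 W the JND is 0.1 \times 200\,\mathrm{W} = 20\,\mathrm{W}.

% Finding for intracortical microstimulation: the JND is roughly constant,
% independent of the stimulus amplitude
\Delta I_{\mathrm{JND}} \approx \mathrm{const}
```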
“It shows that there is something fundamentally different about the way the brain responds to electrical stimulation than it does to natural stimulation,” Bensmaia said.
“This study gets us to the point where we can actually create real algorithms that work. It gives us the parameters as to what we can achieve with artificial touch, and brings us one step closer to having human-ready algorithms.”

The study, “Behavioral assessment of sensitivity to intracortical microstimulation of primate somatosensory cortex,” was supported by the Defense Advanced Research Projects Agency Contract N66001-10-C-4056. Additional authors include Sungshin Kim, Thierri Callier and Gregg Tabot from the University of Chicago, Robert Gaunt from the University of Pittsburgh, and Francesco Tenore from Johns Hopkins University.

Friday, 18 November 2016

Brain-to-Brain Interface

Scientists at the University of Washington have successfully completed what is believed to be the most complex human brain-to-brain communication experiment ever. It allowed two people located a mile apart to play a game of "20 Questions" using only their brainwaves, a nearly imperceptible flash of light, and an internet connection to communicate.
Brain-to-brain interfaces have gotten much more complex over the last several years. Miguel Nicolelis, a researcher at Duke University, has even created "organic computers" by connecting the brains of several rats and chimps together.

But in humans, the technology remains pretty basic, primarily because the most advanced brain-to-brain interfaces require direct access to the brain. We're not exactly willing to saw open a person's skull in the name of performing some rudimentary tasks for science.
Using two well-known technologies, electroencephalography (EEG) and transcranial magnetic stimulation (TMS), Andrea Stocco and Chantel Prat were able to increase the complexity of a human brain-to-brain interface.
The EEG was used to read one person's brain waves, while the TMS machine was used to create a "phosphene"—a ring of light perceptible only to the wearer—in the other.
In the experiment, the EEG wearer was shown an object—a shark, for instance. The TMS wearer then used a computer mouse to ask a question of the EEG wearer—maybe "can it fly?" The EEG wearer then focused on a screen flashing either "yes" or "no." The brain waves were then read and transferred via the internet. If the answer was "yes," the TMS wearer would see a phosphene, suggesting he or she was on the right track to guessing the object.
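Stripped of the hardware, what actually crosses the link in that experiment is one yes/no bit per question. The toy simulation below captures only that loop; the object, the question names and the function names are invented for illustration and do not come from the study.

```python
# Toy simulation of the 20-Questions loop: the "respondent" answers a yes/no
# question about a hidden object, one bit crosses the link, and the "inquirer"
# perceives a phosphene only for "yes". All names and objects here are invented.
HIDDEN_OBJECT = {"name": "shark", "can_fly": False, "lives_in_water": True}

def respondent_answer(question_key: str) -> int:
    """EEG side: the respondent attends to YES or NO; we reduce that to one bit."""
    return int(HIDDEN_OBJECT[question_key])

def inquirer_perceives(bit: int) -> str:
    """TMS side: a phosphene is delivered only when the transmitted bit is 1."""
    return "phosphene (yes)" if bit else "no phosphene (no)"

if __name__ == "__main__":
    for question in ("can_fly", "lives_in_water"):
        bit = respondent_answer(question)               # read out via EEG in the real experiment
        print(question, "->", inquirer_perceives(bit))  # delivered via TMS over the internet link
```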
In 72 percent of games, the guesser was able to eventually get to the correct object. In a control group, just 18 percent of guessers were.
As I mentioned, both of these technologies are well-known and are used in medical settings regularly. There is perhaps no totally new technological breakthrough here, but it's a clever way of hooking neurological devices to each other to complete a task. In a paper published in PLOS One, Stocco and Prat write that the task is "collaborative, potentially open-ended, operates in real-time, and requires conscious processing of incoming information."
"Because phosphenes are private to the receiver and can be perceived under a variety of conditions, or even while performing other actions, they represent a more versatile and interesting means of transferring information than [previous brain-to-brain interfaces]," they wrote.
Neither Stocco nor Prat was able to talk to me today because they were in the process of moving offices, but in a press release published by the university, they suggest that future experiments could be less gimmicky and more therapeutic. Future experiments will connect the brains of someone who suffers from ADHD and someone who doesn't to see if it's possible to induce the ADHD student into a higher state of attention.
Such experiments align closely with Nicolelis's work at Duke: He told me that he believes it may be possible to connect a healthy brain with a stroke-damaged one to induce the damaged brain to perform at a higher level, which he thinks may have lasting effects.
In its current state, that's a best-case scenario for brain-to-brain interfaces. Right now, it makes more sense to, say, type your thoughts to someone rather than communicate them using brain waves and fancy machines. So, in that sense, we may be creating brain stimulation devices more so than communication devices.

Thursday, 17 November 2016

Blue Brain and the Human Brain Project


The central vision for the Human Brain Project (HBP) - reconstructing and simulating the human brain - was developed by Henry Markram, based on the research strategy developed in the Blue Brain Project. 

Starting in 2010, Markram created and coordinated the consortium of 80 European and International partners that developed the original HBP project proposal. In January 2013, after multiple rounds of peer review, the project was selected as one of two FET Flagships, to be funded with 1 billion euro over 10 years. The project began operations in October of the same year. The project currently includes 112 partners.
In forming the Human Brain Project consortium, Markram invited two other novel research areas to join the project: medical informatics, led by Richard Frackowiak, and neuromorphic computing, led by Karlheinz Meier. Other key partners were also invited to contribute to the project's three core research areas, which were subsequently labelled as Future Neuroscience, Future Medicine and Future Computing.
The goal of the Human Brain Project is to use "ICT as a catalyst for a global collaborative effort to understand the human brain and its diseases and ultimately to emulate its computational capabilities". In the current ramp-up phase, the Blue Brain team preserves the original vision of reconstructing and simulating the brain, focusing on Neuroinformatics, Brain Simulation, High Performance Computing, and Neurorobotics.

Saturday, 5 November 2016

wall climbing robot moving through ceiling


This robot was developed by us.
Its weight is 1.2 kg. Existing wall-climbing robots weigh less but cost more; our robot's frame is made of aluminium sheet, which reduces its cost.
To stick the robot to the wall, a vacuum-producing impeller is used, which we modified ourselves.
We modified the four wheels to provide proper grip and to maintain ground clearance.
For ground clearance we used a rubber strip.
By reducing its frame weight, our robot can carry an extra 1 kg of payload (a rough force-balance sketch appears after the applications list below).
APPLICATIONS:
#wall_painting
#military_inspection
#spying
#Inspecting bridges, flyovers and water tanks.
#Cleaning high-rise buildings.
#Search and rescue operations.
#Construction work: carrying loads from one place to another where human workers cannot reach.
#Inspecting buildings under construction.
#Finding cracks and damage inside pipelines.
#Checking the alignment of surfaces.
#Checking for bending on wind-mill towers.
#Checking for cracks on dams.
#Inspecting nuclear power plants.
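As a rough sanity check on the payload figure quoted above, the holding force the impeller must generate on a vertical wall can be estimated with a simple friction balance. The friction coefficient and safety factor below are assumptions, not measured values for this robot.

```python
# Rough force-balance sketch for a wall-climbing robot on a vertical wall:
# friction from the suction-generated normal force must support the weight.
# The friction coefficient is an assumed value, not a measurement of this robot.
G = 9.81          # m/s^2
MU = 0.6          # assumed rubber-on-wall friction coefficient

def required_suction_force(total_mass_kg: float, safety_factor: float = 2.0) -> float:
    """Normal force (N) the impeller must produce so friction holds the robot up."""
    weight = total_mass_kg * G
    return safety_factor * weight / MU

if __name__ == "__main__":
    robot_kg, payload_kg = 1.2, 1.0               # figures quoted in the post
    print(round(required_suction_force(robot_kg + payload_kg), 1), "N of suction force")
```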

wall climbing robot moving through vertical wall.

This is the same robot described in the previous post, here shown climbing a vertical wall rather than a ceiling. The specifications and applications are the same as above, with one addition:
#For military and police inspections.
