Applied Science in Mind Reading

DOI : 10.17577/IJERTV9IS060575




Anjali Soorej

Integrated Master of Computer Application SCMS School of Technology and Management Cochin, India

Abstract: Modern technology has led to many new inventions; one such innovation is the mind-reading computer. The mind is said to be an abstract entity, and impairment in theory of mind is thought to be the primary inhibitor of emotion understanding and social intelligence in individuals with autism. The goal of building mind-reading machines is to enable computer technologies to understand and react to people's emotions and mental states. This enables the recognition and reading of complex mental states, allowing brain-computer interface interaction. In this paper, I present the science behind mind-reading computers, from the concept of mind reading to the techniques used for it.

Keywords: Mind-reading computers, Facial expression, Futuristic headband, Mind-controlled wheelchair, Brain-computer interface.


  1. INTRODUCTION

        Over the last few decades, there has been significantly increased interest in improving the quality of interaction between users and computers. Researchers have been examining and analyzing human behavior. Drawing motivation from brain research, computer vision, and AI, the team at the Computer Laboratory of the University of Cambridge has created mind-reading computers. People express their mental states, emotions, thoughts, and desires all the time through facial expressions, vocal nuances, and gestures. This is true even when they are interacting with a machine. Understanding human thought is one of the most complex tasks, as no one can know what a person will do next by analyzing his present thoughts.

        Mind reading may be defined as inferring human thoughts, emotions, desires, and so on; the simplest method of mind reading is understanding facial expressions and gestures [6]. Further studies are under way on body postures, to learn more about a person's mental state. The theory of mind reading can be described as the ability to attribute mental states to others from their behavior, and to use that knowledge to guide our own actions and predict those of others. The machine works by using digital video cameras that analyze a person's facial expression in real time and infer the person's underlying mental state, such as thinking, confused, bored, agreeing, disagreeing, happy, sad, or angry. Prior knowledge of how particular mental states are expressed by facial expressions and head gestures is stored in the machine's database and then matched to give the mental state [4]. The mind-reading system is thus a coordination of human psychology and affective computing techniques.

        Figure 1: Process of mind reading

    1. Biofeedback

      Biofeedback is a process that enables an individual to learn how to change physiological activity to improve health and performance. Through biofeedback one learns to recognize the physical signs and symptoms of stress and anxiety, such as heart rate, body temperature, and muscle tension [1]. There are different types of biofeedback, including breathing, heart rate, galvanic skin response, blood pressure, skin temperature, brain waves, and muscle tension.

    2. Stimulus and Response

    When a subject is given a certain stimulus, the brain automatically produces a measurable response, so there is no need to train the subject to manipulate specific brain waves.

  2. FUTURISTIC HEADBAND: HOW DOES IT WORK?

    The futuristic headband sends a beam of light into the tissues of the head, where it is absorbed by active, blood-filled tissue. It measures the volume and oxygen level of the blood around the subject's brain using a technology called functional near-infrared spectroscopy (fNIRS) [7]. The headband measures how much light was absorbed. The results are comparable to those from an MRI, yet can be gathered with lightweight, non-invasive equipment. Wearing the fNIRS sensor, subjects were asked to count the number of squares on a rotating on-screen cube and to perform other tasks [8]. The specific region of the brain where the blood-flow change occurs should give signs of the brain's metabolic changes and, by extension, its workload, which could be a proxy for emotions such as frustration. Measuring mental workload, frustration, and distraction is normally limited to qualitatively observing computer users or to administering surveys after the completion of a task, possibly missing important insight into the user's changing experiences.

    Figure 2: Futuristic headband (old version)

    Figure 3: Futuristic headband (new version)


    1. Facial Affect Detection

      Facial affect detection is done using a hidden Markov model, neural network processing, or an active appearance model [9]. The image of a person is matched bit by bit with the images stored in a database for detection.
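The bit-by-bit matching described above can be sketched as a nearest-neighbour search over a database of stored images. This is only an illustration of the idea, not the paper's actual system: the 2x2 "images" and the mean-squared-difference metric below are invented for the example.

```python
import numpy as np

def match_expression(probe, database):
    """Return the label of the stored image closest to the probe,
    comparing pixel by pixel with mean squared difference."""
    best_label, best_dist = None, float("inf")
    for label, stored in database.items():
        dist = float(np.mean((probe - stored) ** 2))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Tiny illustrative "database" of 2x2 grayscale patches per mental state.
database = {
    "happy": np.array([[0.9, 0.8], [0.1, 0.1]]),
    "sad":   np.array([[0.1, 0.2], [0.9, 0.8]]),
}
probe = np.array([[0.85, 0.8], [0.15, 0.1]])
print(match_expression(probe, database))  # → happy
```

A real detector would of course work on full-resolution face crops and use the HMM, neural network, or active appearance model named above rather than raw pixel distance.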

    2. Emotional Classification

      One may distinguish one emotion from another, so that emotions can be characterized and grouped on an emotional basis. According to Paul Ekman, emotions such as anger, disgust, sadness, surprise, fear, and happiness are basic and can be classified, while emotions such as amusement, contempt, contentment, excitement, guilt, pride, relief, satisfaction, and shame cannot. Later, Robert Plutchik created a wheel of emotions, demonstrating how different emotions can blend into one another to create new emotions [3]. By 2001, Parrott had identified over 100 emotions and conceptualized them as a tree-structured list.
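Parrott's tree-structured list can be pictured as a nested mapping from primary emotions to their secondary branches. The sketch below shows only a commonly cited subset of the branches (the full classification runs to over 100 tertiary emotions), so treat the exact groupings as illustrative.

```python
# Partial sketch of Parrott's tree-structured emotion list: six primary
# emotions, each branching into secondary emotions (abbreviated here).
parrott_tree = {
    "love":     ["affection", "longing"],
    "joy":      ["cheerfulness", "contentment", "pride", "relief"],
    "surprise": ["surprise"],
    "anger":    ["irritation", "rage", "disgust"],
    "sadness":  ["suffering", "disappointment", "shame"],
    "fear":     ["horror", "nervousness"],
}

def primary_of(emotion):
    """Walk the tree to find which primary emotion a given emotion falls under."""
    for primary, secondaries in parrott_tree.items():
        if emotion == primary or emotion in secondaries:
            return primary
    return None

print(primary_of("pride"))  # → joy
```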

    3. Facial Electromyography

      Facial electromyography is used to measure the electrical activity of the muscles. The computer cursor is driven according to distinctive patterns of facial muscle activity. It mainly helps people with disabilities to control the movement of the cursor on a computer screen.

    4. Galvanic Skin Response

      It is the measure of skin conductivity, which depends on how moist the skin is. Skin conductance is not under conscious control; instead, it is modulated autonomically by sympathetic activity, which drives aspects of human behavior as well as cognitive and emotional states [2]. Skin conductance therefore serves as an index of autonomic emotional regulation. It can be used to validate self-reports, surveys, or interviews of participants within a study.
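One simple way to turn a raw conductance trace into events is to flag sudden rises above a short moving-average baseline. The detector below is a crude sketch of that idea; the sample values, window size, and 0.05 microsiemens threshold are all assumptions, not values from the paper.

```python
def gsr_responses(samples, window=4, rise=0.05):
    """Flag indices where skin conductance (in microsiemens) rises above the
    mean of the preceding `window` samples by more than `rise` -- a crude
    phasic skin-conductance response detector."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if samples[i] - baseline > rise:
            flagged.append(i)
    return flagged

# Flat tonic level with one sudden rise (e.g. a reaction to a stimulus):
trace = [2.00, 2.01, 2.00, 2.02, 2.01, 2.30, 2.28, 2.10]
print(gsr_responses(trace))  # → [5, 6]
```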

    5. Blood Volume Pulse

      It is an optical, non-invasive sensor that measures cardiovascular dynamics. It detects changes in the arterial translucency. It is measured by a process called photoplethysmography. As a result, a graph indicating blood flow through the extremities will be depicted.

    6. Head Pose Estimation

      Head pose is an important visual cue in many scenarios such as social event analysis, human-robot interaction, and driver-assistance systems. The estimate helps determine the interaction between people and extract the visual focus of attention.


    Facial recognition is a thriving application of deep learning, and combining it with pose estimation results in a powerful match. We must know how the head is tilted with respect to the camera. In a driver-assistance system, a camera examines the driver's face and can use head pose estimation to see whether the driver is paying attention to the road. Another application uses head-pose-based gestures to control a hands-free application or game. The pose of an object comprises its orientation and position relative to the camera. You can change the pose by moving either the object with respect to the camera or vice versa.

    The pose estimation problem described here is often referred to as the Perspective-n-Point problem, or PnP, in computer vision jargon. The goal is to find the pose of an object when we have a calibrated camera and we know the locations of n 3D points on the object and the 2D projections of those same points in the image. To estimate the pose of an object in an image, you need the following information [5]:

      • 2D coordinates of a few points: In the case of a face you could choose the corners of the eyes, the tip of the nose, the corners of the mouth, and so on. Dlib's facial landmark detector provides many points to choose from, such as the tip of the nose, the chin, the corresponding corners of the eyes, and the left and right corners of the mouth.

      • 3D locations of the same points: You also need the 3D locations of the 2D feature points. Ideally, a 3D model of the person in the photo would give the 3D locations, but in practice a generic 3D model suffices. So the 3D locations of a few points are taken in an arbitrary reference frame.

      • Intrinsic parameters of the camera: Here the camera is assumed to be calibrated; that is, the focal length of the camera, the optical center in the image, and the radial distortion parameters are known. If the camera is not calibrated, we can approximate the optical center by the center of the image and the focal length by the width of the image in pixels, assuming that radial distortion does not exist.
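The approximation in the last bullet can be written down directly. The sketch below builds the intrinsic matrix for an assumed 640x480 image and projects a 3D point (given in camera coordinates) to pixel coordinates with the pinhole model; in practice, this matrix, the 2D-3D correspondences, and the distortion coefficients would be handed to a PnP solver such as OpenCV's `cv2.solvePnP` to recover the head pose.

```python
import numpy as np

# Approximate intrinsics as described above: focal length ~ image width,
# optical centre ~ image centre, no radial distortion.
width, height = 640, 480
f = float(width)
cx, cy = width / 2.0, height / 2.0
K = np.array([[f,   0.0, cx],
              [0.0, f,   cy],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Pinhole projection of a 3D point (camera coordinates, z > 0) to pixels."""
    p = K @ np.asarray(point_cam, dtype=float)
    return p[0] / p[2], p[1] / p[2]

# A point on the optical axis lands at the image centre:
print(project((0.0, 0.0, 1000.0)))    # → (320.0, 240.0)
# A point offset in x moves right in the image:
print(project((100.0, 0.0, 1000.0)))  # → (384.0, 240.0)
```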

    Figure 4: Head pose estimation

    Head pose estimation uses expression-invariant features to estimate pitch (50°), yaw (50°), and roll (30°): for example, head yaw is estimated using the ratio of the left eye width to the right eye width, and head roll using the angle between the two inner eye corners. Motion, shape, and color descriptors are among the feature points identified as facial actions. For a real-time video system, motion- and shape-based analysis are particularly suitable, while color-based analysis is computationally efficient, especially when combined with feature localization.
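The two eye-corner heuristics above amount to simple plane geometry. The sketch below implements them directly; the landmark coordinates used in the example are invented, and a real system would take them from a facial landmark detector.

```python
import math

def head_roll(left_inner_eye, right_inner_eye):
    """Roll (degrees): angle of the line joining the two inner eye corners."""
    dx = right_inner_eye[0] - left_inner_eye[0]
    dy = right_inner_eye[1] - left_inner_eye[1]
    return math.degrees(math.atan2(dy, dx))

def yaw_ratio(left_eye_width, right_eye_width):
    """Yaw proxy: ratio of left to right eye width; close to 1.0 when the
    face is frontal, drifting away from 1.0 as the head turns."""
    return left_eye_width / right_eye_width

print(head_roll((280, 240), (360, 240)))  # level head → 0.0
print(yaw_ratio(40.0, 40.0))              # frontal face → 1.0
```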

    For lip shape tracking, which identifies, for example, the lip-corner pull of a smile or a lip pucker, the polar distance between each of the two mouth corners and an anchor point is computed. The change from the initial frame is observed, and the average percentage change is used to discern mouth displays.

    Figure 5: Lip shape tracking
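The polar-distance computation described above can be sketched as follows. The landmark coordinates, the nose-base anchor, and the 10% decision threshold are all invented for the example; only the method (average percentage change of corner-to-anchor distances versus the neutral frame) comes from the text.

```python
import math

def avg_pct_change(initial_corners, current_corners, anchor):
    """Average percentage change of the mouth-corner-to-anchor polar
    distances relative to the initial (neutral) frame."""
    changes = []
    for p0, p1 in zip(initial_corners, current_corners):
        d0 = math.dist(p0, anchor)
        d1 = math.dist(p1, anchor)
        changes.append((d1 - d0) / d0)
    return sum(changes) / len(changes)

def mouth_display(initial_corners, current_corners, anchor, threshold=0.10):
    change = avg_pct_change(initial_corners, current_corners, anchor)
    if change > threshold:
        return "lip corner pull"   # corners move away from the anchor (smile)
    if change < -threshold:
        return "lip pucker"        # corners draw in towards the anchor
    return "neutral"

anchor = (320, 300)                 # e.g. a stable point at the base of the nose
neutral = [(280, 360), (360, 360)]  # mouth corners in the initial frame
smiling = [(255, 350), (385, 350)]  # corners pulled outwards and up
print(mouth_display(neutral, smiling, anchor))  # → lip corner pull
```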

    Color descriptors can tell whether the mouth is closed or open by differentiating the teeth and the aperture by color. For example, teeth are represented by green, and the aperture is represented by red.
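Once pixels in the mouth region are labelled as teeth or aperture, counting them yields an open/closed decision. This is a minimal sketch of that step; the 5% threshold and the pixel counts are assumptions.

```python
def mouth_state(teeth_pixels, aperture_pixels, region_pixels, threshold=0.05):
    """'Open' if a noticeable fraction of the mouth region is classified as
    teeth (green in the visualisation) or aperture (red); else 'closed'."""
    open_fraction = (teeth_pixels + aperture_pixels) / region_pixels
    return "open" if open_fraction > threshold else "closed"

print(mouth_state(teeth_pixels=120, aperture_pixels=200, region_pixels=2000))  # → open
print(mouth_state(teeth_pixels=5, aperture_pixels=10, region_pixels=2000))     # → closed
```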


    1. A computer program that can read silently spoken words by analyzing nerve signals in our mouths and throats has been developed by NASA. Preliminary results show that using button-sized sensors, which attach under the chin and on either side of the Adam's apple, it is possible to pick up and recognize the nerve signals and patterns from the tongue and vocal cords that correspond to specific words.

    2. Mind reading helps in predicting bankruptcy by measuring the financial distress of public firms. By analyzing the stress of creditors and investors, we can know in advance that a firm may go bankrupt. This type of prediction is a method of formal analysis.

    3. This can help astronauts to send commands to rovers on other planets and help injured astronauts or physically disabled people to control machines.

    4. Mind-reading computers could save your life: devices that allow people to write letters or play pinball using just the power of their brains have become a major draw at the world's biggest high-tech fair. Scientists are researching ways to monitor motorists' brain waves to improve reaction times in a crash.

    5. By reading someone's mind, we eliminate the capability to lie.

    6. It can control robots by brainpower. Lights flash at different frequencies at the four points of the compass in a small box. The robot is controlled by examining the brainwaves generated when the user focuses on the corresponding light, depending on whether he wants the robot to move up, down, left, or right.

    7. By scanning images of brain activity, scientists may be able to decode people's thoughts, dreams, and intentions, and these thoughts can then be converted to speech.

    8. Mind reading computers could communicate with coma patients. Researchers used neuroimaging to read human thoughts via brain activity when they are communicating. They say the research could lead to dramatic new ways of attempting to communicate with patients in a vegetative state.

    9. Using this brain-wave monitoring technology, a vehicle can also tell whether the driver is drowsy, possibly warning the driver to take a break.

    10. Such tools can be used for screening suspected terrorists as well as for predicting future dangerousness more generally (crime prediction technology): tapping brains for future crimes, since prevention is said to be better than retribution in criminal justice. This compels us to contemplate a system that manages and reduces the risk of criminal behavior in the first place, which reduces the risk of criminalizing the innocent.

    11. Using sensors, it has been used to do simple web searches, and with the brain-computer typing machine we can map the brain waves produced when you think about moving left, right, forward, and backward. This can be used in a mind-controlled wheelchair for locomotion.

    12. It can be implemented on a wheelchair so that the wheelchair can be moved through mind control, permitting people who cannot easily use normal wheelchairs due to their disability. It can also serve as a voice for coma or paralytic patients.

    13. This can be used to exchange information so that people can receive a message without being overheard. It is specifically useful for military purposes or sting operations.

    14. This technology can be combined with consoles and used for mind gaming, robotics.
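The flashing-light robot control in item 6 rests on the principle that staring at a light flickering at a given rate evokes brain activity at the same frequency, which an FFT can pick out of an EEG channel. The sketch below demonstrates this on a synthetic signal; the command frequencies, sample rate, and noise level are assumptions for the example, not values from a real system.

```python
import numpy as np

def dominant_frequency(eeg, sample_rate):
    """Return the strongest frequency component (Hz) of one EEG channel,
    ignoring the DC term -- the basis of flashing-light control, where each
    command light flickers at a distinct rate."""
    spectrum = np.abs(np.fft.rfft(eeg))
    spectrum[0] = 0.0                                  # drop the DC offset
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

COMMANDS = {8.0: "up", 10.0: "down", 12.0: "left", 15.0: "right"}  # Hz -> move

# Synthetic one-second recording dominated by the 12 Hz "left" light:
sample_rate = 256
t = np.arange(sample_rate) / sample_rate
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.2 * rng.standard_normal(sample_rate)
print(COMMANDS.get(dominant_frequency(eeg, sample_rate)))  # → left
```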


      A mind-controlled wheelchair is a mind-machine interfacing device. It uses thought, in the form of neural impulses, to command the motorized wheelchair's movement. The first such device was designed by Diwakar Vaish. The wheelchair is critical for patients suffering from locked-in syndrome, a condition in which the patient cannot move or communicate verbally but is aware of everything happening in the surroundings, with eyes open. Such a wheelchair can also be used in cases of muscular dystrophy, a disease that weakens the musculoskeletal system and hampers locomotion. A mind-controlled wheelchair works using a brain-computer interface. The user wears an electroencephalogram (EEG) headset, which detects the neural impulses that reach the scalp and allows the on-board micro-controller to detect the user's thought process, interpret it, and control the wheelchair's locomotion.

      Figure 6: Mind-controlled wheelchair

      Different types of sensors, for example temperature sensors, sound sensors, and a series of distance sensors that identify uneven surfaces, are embedded in the wheelchair. The chair is programmed to avoid surfaces such as stairs or steep inclines. It also has a safety switch in case of danger: the user can close his eyes quickly to trigger an emergency stop.


      • There is no way to neutralize this technology, as we cannot stop someone from eavesdropping on our minds.

      • By invading the thoughts of one's mind, we breach the privacy of an individual, so confidential data can be hacked by unauthorized persons, which can be dangerous.

      • If it is developed or utilized by miscreants, it can prove to be risky.

      • This technology can extract important, secure, and confidential information from an individual, a state, or even a country.
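The mind-controlled wheelchair's control loop described earlier can be sketched as a single decision step: a classified EEG command drives the motors, while the distance sensors and the eye-closure safety switch can override it. This is a hypothetical illustration of the logic, not Vaish's actual controller; the command labels are assumptions.

```python
def wheelchair_command(eeg_label, unsafe_surface_ahead, eyes_closed):
    """Resolve one control step: the safety switch first, then surface
    checks, then the user's decoded intent."""
    if eyes_closed:                                  # safety switch: emergency stop
        return "stop"
    if unsafe_surface_ahead and eeg_label == "forward":
        return "stop"                                # refuse to drive towards stairs
    if eeg_label in ("left", "right", "forward", "back"):
        return eeg_label
    return "stop"                                    # unrecognized thought pattern

print(wheelchair_command("forward", unsafe_surface_ahead=False, eyes_closed=False))  # → forward
print(wheelchair_command("forward", unsafe_surface_ahead=True, eyes_closed=False))   # → stop
print(wheelchair_command("left", unsafe_surface_ahead=False, eyes_closed=True))      # → stop
```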


    Mind-reading technology is not science fiction; a couple of wearables can already track brain activity. As of 2020, for the first time there are brain-computer interfaces with which you can, in theory, control a device by focusing your thoughts on it. They target entertainment and gaming, as that is where the technology can reach the most people. Wearing the headband, users were introduced to a video game with a rocket ship: the harder you focus, the faster the rocket ship moves, increasing the score; by relaxing the mind, you can slow the rocket down. A light on the front of the headband turns red when your brain is intensely focused, yellow in a relaxed state, and blue in a meditative state.

    Trials have also moved forward with implantable devices in humans, which can serve different purposes. Nissan unveiled Brain-to-Vehicle technology that would allow vehicles to interpret signals from the driver's brain. Elon Musk's clinical trial, for example, focuses on patients with complete paralysis due to an upper spinal cord injury. The implanted mechanism serves another purpose: it will allow the user to control virtually any device, such as a smartphone or an electric vehicle, with the mind, which would be of great importance for patients with physical limitations. Several Australian mining companies have adopted the SmartCap, a device that looks like a baseball cap but is lined with EEG electrodes on the interior rim, to reduce the impact of fatigue on the safety and productivity of their staff. Changes in the emotional states of employees on the production line, in the military, and at the helm of high-speed trains can be identified by deploying brain-reading technology, which is used by government-backed surveillance projects.


    From my findings in mind reading, I can read one's mind and tell whether the person is lying or not. We can identify whether a person is lying from a change in the voice, from bodily expressions that do not match what they are saying out loud, or from long pauses before they say something. Some people may show signs of sweating or trembling. The next method of lie detection is noting the movement of the eyes while speaking: if the subject is right-handed and the eyes move towards the left, the subject tends to be lying; if the subject is left-handed and the eyes move towards the right, the subject tends to be lying. Having studied mind reading, its various techniques, and its applications, I suggest studying in detail how psychology and computer techniques coordinate in mind reading.


Mind reading is the ability to infer people's mental states and use that to make sense of and predict their behavior. In this paper, I have discussed the impact mind-reading technology has on the present world. The science behind the technology explains how mind-reading computers work. The different techniques used in mind reading are reviewed, and the various advantages and applications are listed. The head pose estimation technique is analyzed thoroughly. My findings related to the latest updates are also recorded.


  1. Cherry, K. (2019, May 16). What Is Biofeedback and How Does It Work? Verywell Mind.

  2. Farnsworth, B. (2018, July 17). What is GSR (galvanic skin response) and how does it work? iMotions.

  3. Handel, S. (2011, May 24). Classification of Emotions. The Emotion Machine.

  4. Khan, M. S. (2014). Mind Reading Computer. International Journal of Computer Science and Mobile Computing.

  5. Mallick, S. (2016, September 26). Head Pose Estimation using OpenCV and Dlib. LearnOpenCV.

  6. Babhulgaonkar, S. J., et al. (2017). Mind Reading Computer Technology. International Journal of Scientific Research in Computer Science, Engineering and Information Technology.

  7. Thakur, S. (2015, January 10). Mind Reading Seminar and PPT with PDF Report. Study Mafia.

  8. Wood, L. (2007, October 02). Computer to Read Minds. Live Science.

  9. Zainal, S. M. (n.d.). The Future of Mind Reading Computer. University of Canberra.
