Applied Science in Mind Reading

Abstract—Modern technology has led to many new inventions; one such innovation is the mind-reading computer. The mind is said to be an abstract entity, and impairment in the theory of mind is thought to be the primary inhibitor of emotion understanding and social intelligence in individuals with autism. The goal of building mind-reading machines is to enable computer technologies to understand and react to people's emotions and mental states. Such machines can recognize and read complex mental states, allowing brain-computer interface interaction. In this paper, I present the science behind mind-reading computers, from the concept of mind reading to the techniques used to achieve it.


I. INTRODUCTION
Over the last few decades, there has been significantly increased interest in improving the quality of interaction between users and computers. Researchers have been examining and analyzing how human beings communicate. Drawing motivation from brain science, computer vision, and artificial intelligence, the team in the Computer Laboratory at the University of Cambridge has created mind-reading computers. People express their mental states, emotions, thoughts, and desires all the time through facial expressions, vocal nuances, and gestures, and this is true even when they are interacting with a machine. Understanding human thoughts is one of the most complex tasks, as no one can know what a person will do next merely by analyzing their present thoughts.
II. ABOUT MIND READING
Mind reading may be defined as inferring human thoughts, emotions, desires, and similar mental states. The simplest method of mind reading is interpreting facial expressions and gestures [6]. Further studies are examining body postures to learn more about a person's mental state. The theory of mind reading can be described as the ability to attribute mental states to others from their behavior, and to use that knowledge to guide our own actions and predict those of others. The machine works by using digital video cameras that analyze a person's facial expressions in real time and infer the person's underlying mental state, such as thinking, confused, bored, agreeing, disagreeing, happy, sad, or angry. Prior knowledge of how specific mental states are communicated by facial expressions and head motions is stored in the machine's database and then matched against the observed behavior to output the mental state [4]. The mind-reading system is thus a coordination of human psychology and affective computing techniques.

A. Biofeedback
It is a process that enables an individual to learn how to change physiological activity to improve health and performance. Biofeedback training teaches an individual to recognize the physical signs and symptoms of stress and anxiety, such as changes in heart rate, body temperature, and muscle tension [1]. There are different types of biofeedback, including breathing, heart rate, galvanic skin response, blood pressure, skin temperature, brain waves, and muscle tension.

B. Stimulus and Response
When a subject is given a certain stimulus, the brain automatically produces a measurable response, so there is no need to train the subject to manipulate specific brain waves.

IV. FUTURISTIC HEADBAND: HOW DOES IT WORK?
The futuristic headband sends a beam of light into the tissues of the head, where it is absorbed by active, blood-filled tissue. It measures the volume and oxygen level of the blood around the subject's brain using a technology called functional near-infrared spectroscopy (fNIRS) [7]. The headband measures how much light was absorbed; the results are comparable to those of an MRI but can be gathered with lightweight, non-invasive equipment. Wearing the fNIRS sensor, subjects were asked to count the number of squares on a rotating on-screen cube and to perform other tasks [8]. The specific region of the brain where the blood-flow change occurs should give signs of the brain's metabolic changes and, by extension, its workload, which can be a proxy for feelings such as frustration. Measuring mental workload, frustration, and distraction is normally limited to subjectively observing computer users or to administering surveys after a task is completed, possibly missing significant insight into the user's changing experience.
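The fNIRS measurement described above rests on the modified Beer-Lambert law: a change in how much light the tissue absorbs is converted into a change in chromophore (hemoglobin) concentration. The following is a minimal sketch of that relation; all coefficient values are illustrative assumptions, not calibrated instrument parameters.

```python
import math

def delta_absorbance(i_baseline, i_task):
    """Change in optical density between baseline and task light intensities."""
    return math.log10(i_baseline / i_task)

def delta_concentration(delta_od, epsilon, distance_cm, dpf):
    """Modified Beer-Lambert law: concentration change from absorbance change.

    delta_od    : change in optical density (unitless)
    epsilon     : extinction coefficient of the chromophore (1/(mM*cm))
    distance_cm : source-detector separation on the scalp (cm)
    dpf         : differential pathlength factor (unitless)
    """
    return delta_od / (epsilon * distance_cm * dpf)

# Illustrative numbers only: slightly more light absorbed during the task
d_od = delta_absorbance(i_baseline=1.00, i_task=0.95)
d_c = delta_concentration(d_od, epsilon=1.5, distance_cm=3.0, dpf=6.0)
```

A positive concentration change here would suggest increased blood oxygenation in the probed region, which a workload classifier could then use as a feature.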

A. Facial Affect Detection
It is done using a hidden Markov model, neural network processing, or an active appearance model [9]. The image of a person is matched piece by piece against the images stored in a database for expression detection.
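The database-matching step can be illustrated with a deliberately simplified stand-in: a nearest-neighbor lookup over stored feature vectors (a real system would use the HMM, neural network, or active appearance model named above, and the feature values below are made up for illustration).

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical database: one feature vector per labelled expression image.
DATABASE = {
    "happy":    (0.9, 0.1, 0.2),
    "sad":      (0.1, 0.8, 0.3),
    "confused": (0.4, 0.4, 0.9),
}

def match_expression(features):
    """Return the stored label whose feature vector is closest to the input."""
    return min(DATABASE, key=lambda label: euclidean(DATABASE[label], features))
```

For example, an observed vector of `(0.85, 0.15, 0.25)` would be matched to the stored "happy" entry, since that is its nearest neighbor.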

B. Emotional Classification
One may distinguish one emotion from another so that emotions can be characterized and grouped. According to Paul Ekman, basic emotions such as anger, disgust, sadness, surprise, fear, and happiness can be classified from facial expressions, whereas emotions such as amusement, contempt, contentment, excitement, guilt, pride, relief, satisfaction, and shame cannot. Later, Robert Plutchik created a wheel of emotions, demonstrating how different emotions can blend into one another to create new emotions [3]. By 2001, Parrott had identified over 100 emotions and conceptualized them as a tree-structured list.
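Parrott's tree-structured list lends itself naturally to a nested data structure. The sketch below encodes a small illustrative excerpt (primary, secondary, and tertiary levels; the chosen entries are an abridged sample, not the full taxonomy).

```python
# Illustrative excerpt of a primary -> secondary -> tertiary emotion tree.
EMOTION_TREE = {
    "love":  {"affection":    ["adoration", "fondness"],
              "longing":      ["longing"]},
    "joy":   {"cheerfulness": ["amusement", "delight"],
              "pride":        ["triumph"]},
    "anger": {"rage":         ["fury", "hostility"],
              "irritation":   ["annoyance"]},
}

def tertiary_emotions(tree):
    """Flatten the tree into the list of most specific (tertiary) emotions."""
    return [leaf
            for secondary in tree.values()
            for leaves in secondary.values()
            for leaf in leaves]
```

A classifier can then report at whichever level of the tree its confidence supports, falling back from a tertiary label like "fury" to the primary label "anger".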

C. Facial Electromyography
It is used to measure the electrical activity of the muscles. Distinct facial-muscle activation patterns can be mapped to movements of a computer cursor. It mainly helps people with disabilities control the movement of the cursor on a computer screen.

D. Galvanic Skin Response
It is the measure of skin conductivity, which depends on how moist the skin is. Skin conductance is not under conscious control; instead, it is modulated autonomously by sympathetic activity, which drives aspects of human behavior as well as cognitive and emotional states [2]. Skin conductance therefore indicates autonomous emotional regulation. It can be used to validate self-reports, surveys, or interviews of participants within a study.

E. Blood Volume Pulse
It is an optical, non-invasive sensor that measures cardiovascular dynamics. It detects changes in arterial translucency through a process called photoplethysmography. The result is a graph indicating blood flow through the extremities.
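Once pulse peaks have been located in the photoplethysmogram, heart rate follows from the average interval between them. This is a minimal sketch of that calculation; the peak timestamps are assumed to come from an upstream peak detector.

```python
def heart_rate_bpm(peak_times_s):
    """Estimate heart rate in beats per minute from the timestamps (seconds)
    of successive pulse peaks in a photoplethysmogram."""
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    mean_ibi = sum(intervals) / len(intervals)  # mean inter-beat interval
    return 60.0 / mean_ibi

# Peaks 0.8 s apart correspond to 75 beats per minute
print(heart_rate_bpm([0.0, 0.8, 1.6, 2.4]))  # 75.0
```

The same inter-beat intervals also feed heart-rate-variability measures, which affective computing work often uses as a stress indicator.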

F. Head Pose Estimation
It is an important visual cue in many scenarios such as social event analysis, human-robot interaction, and driver-assistance systems. The estimate helps determine the interaction between people and extract their visual focus of attention.

VI. HEAD POSE ESTIMATION
Facial recognition is a thriving application of deep learning, and combining it with pose estimation results in a powerful match. We must know how the head is tilted relative to a camera. In a driver-assistance system, a camera examines the driver's face and can use head pose estimation to check whether the driver is paying attention to the road. Another application uses head-pose-based gestures to control a hands-free application or game. The pose of an object is defined by its orientation and position relative to the camera; the pose can be changed either by moving the object relative to the camera or vice versa.
The pose estimation problem described here is often referred to as the perspective-n-point problem, or PnP, in computer vision jargon. The goal is to find the pose of an object when we have a calibrated camera and know the locations of n 3D points on the object as well as their 2D projections in the image. To estimate the pose of an object in an image, you need the following information [5]:
• 2D coordinates of a few points: In the case of a face, you could choose the corners of the eyes, the tip of the nose, the corners of the mouth, etc. Dlib's facial landmark detector provides many points to choose from, such as the tip of the nose, the chin, the corners of the eyes, and the left and right corners of the mouth.
• 3D locations of the same points: You also need the 3D locations of the 2D feature points. Ideally, we would need a 3D model of the person in the photo to get these locations, but in practice a generic 3D model suffices, with the 3D points taken in an arbitrary reference frame.
• Intrinsic parameters of the camera: The camera is assumed to be calibrated; that is, the focal length of the camera, the optical center in the image, and the radial distortion parameters are known. If the camera is not calibrated, we can approximate the optical center by the center of the image and the focal length by the width of the image in pixels, assuming that radial distortion does not exist.
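The intrinsic-parameter approximation in the last bullet can be sketched directly. The snippet below builds the approximate intrinsics exactly as described (focal length = image width, optical center = image center, zero distortion) and applies a pinhole projection to a 3D point in camera coordinates; a real pipeline would hand these values, together with the 2D-3D point pairs, to a PnP solver such as OpenCV's `cv2.solvePnP`. The image size and point here are illustrative.

```python
def approx_camera_matrix(width, height):
    """Approximate intrinsics for an uncalibrated camera: focal length equal
    to the image width in pixels, optical center at the image center, and no
    radial distortion."""
    f = float(width)
    cx, cy = width / 2.0, height / 2.0
    return f, cx, cy

def project(point_3d, f, cx, cy):
    """Pinhole projection of a 3D point (camera coordinates, Z > 0) to pixel
    coordinates (u, v)."""
    x, y, z = point_3d
    return (f * x / z + cx, f * y / z + cy)

f, cx, cy = approx_camera_matrix(640, 480)
# A point on the optical axis projects to the image center
print(project((0.0, 0.0, 1000.0), f, cx, cy))  # (320.0, 240.0)
```

This approximation is usually good enough for head pose, because the solver's rotation estimate is fairly insensitive to moderate errors in the focal length.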
Head pose estimation uses expression-invariant features to estimate pitch (50°), yaw (50°), and roll (30°), for example estimating head yaw from the ratio of the left to the right eye width, and estimating head roll from the angle of the line between the two inner eye corners. Motion, shape, and color descriptors are some of the features identified as facial actions. Motion- and shape-based analyses are particularly suitable for a real-time video system, while color-based analysis is computationally efficient, especially when combined with feature localization.
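The two geometric cues just mentioned reduce to very small computations over landmark coordinates. Here is a sketch under the assumption that the inner eye corners and eye widths come from a landmark detector, with pixel coordinates measured in the usual image convention (y grows downward).

```python
import math

def head_roll_deg(inner_left, inner_right):
    """Roll angle, in degrees, of the line joining the two inner eye corners
    (each an (x, y) pixel coordinate). Zero when the eyes are level."""
    dx = inner_right[0] - inner_left[0]
    dy = inner_right[1] - inner_left[1]
    return math.degrees(math.atan2(dy, dx))

def head_yaw_ratio(left_eye_width, right_eye_width):
    """Ratio of apparent eye widths: about 1.0 when the head faces the
    camera; values far from 1.0 indicate a turn toward one side, since the
    far eye appears narrower under perspective."""
    return left_eye_width / right_eye_width

print(head_roll_deg((100, 100), (160, 110)))  # slight tilt, about 9.46 degrees
```

Because eye widths shrink together when the face merely moves away from the camera, the ratio is invariant to distance, which is what makes it a usable yaw cue.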
For lip-shape tracking, which identifies, for example, a lip-corner pull for a smile or a lip pucker, the polar distance between each of the two mouth corners and an anchor point is computed. The change from the initial frame is observed, and the average percentage change is used to discern mouth displays.
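That polar-distance measure can be sketched as follows; the anchor point and the landmark coordinates are assumed inputs from a face tracker, and the sign convention (positive for corners moving away from the anchor) is an illustrative choice.

```python
import math

def mouth_change_pct(initial_corners, current_corners, anchor):
    """Average percentage change of the mouth-corner-to-anchor distances
    since the initial frame. A sustained positive change in a smile-like
    direction suggests a lip-corner pull; a negative one, a lip pucker."""
    changes = []
    for first, now in zip(initial_corners, current_corners):
        d0 = math.dist(first, anchor)
        d1 = math.dist(now, anchor)
        changes.append((d1 - d0) / d0 * 100.0)
    return sum(changes) / len(changes)

# Both corners double their distance from the anchor: +100 percent on average
print(mouth_change_pct([(3, 4), (0, 5)], [(6, 8), (0, 10)], (0, 0)))  # 100.0
```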
Color descriptors can indicate whether the mouth is closed or open by differentiating the teeth from the aperture using different colors; for example, teeth are represented by green and the aperture by red.
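The decision rule implied here is simple: if either teeth pixels or aperture pixels are present in the mouth region, the mouth is open. The sketch below assumes an upstream color classifier has already produced a per-pixel label mask using the green/red convention from the text.

```python
def mouth_state(label_mask):
    """Classify mouth state from a per-pixel label mask in which, following
    the convention above, "green" marks teeth and "red" marks the dark
    mouth aperture. Any visible teeth or aperture means the mouth is open."""
    flat = [pixel for row in label_mask for pixel in row]
    if "green" in flat or "red" in flat:
        return "open"
    return "closed"

print(mouth_state([["skin", "green"], ["red", "skin"]]))  # open
print(mouth_state([["skin", "skin"], ["skin", "skin"]]))  # closed
```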

VIII. MIND CONTROLLED WHEELCHAIR IN DETAIL
A mind-controlled wheelchair is a mind-machine interfacing device. It uses thought, in the form of neural impulses, to command the motorized wheelchair's movement. The first such device was designed by Diwakar Vaish. The wheelchair is critical for patients suffering from locked-in syndrome, a condition in which the patient cannot move or communicate verbally but is aware of everything happening in the surroundings, with their eyes open. Such a wheelchair can also be used in cases of muscular dystrophy, a disease that weakens the musculoskeletal system and hampers locomotion. A mind-controlled wheelchair works using a brain-computer interface: an electroencephalogram (EEG) headset worn by the user detects neural impulses that reach the scalp, allowing the on-board microcontroller to detect the user's thought process, interpret it, and control the wheelchair's locomotion.
Different types of sensors, for example temperature sensors, sound sensors, and a series of distance sensors that identify uneven surfaces, are embedded in the wheelchair. The chair is programmed to avoid surfaces such as stairs or steep inclines. It also has a safety switch for cases of danger: the user can close their eyes quickly to trigger an emergency stop.
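The control priorities described above (emergency stop first, obstacle avoidance next, then the user's intent) can be sketched as a small decision function. This is a toy illustration, not the actual firmware; the focus threshold and signal names are assumptions.

```python
def wheelchair_command(eeg_focus, eye_closed, obstacle_ahead):
    """Toy decision logic for a mind-controlled wheelchair. The eye-close
    gesture is an emergency stop with top priority, the obstacle sensors
    override the user's intent, and a sustained EEG focus level above an
    illustrative threshold drives the chair forward."""
    if eye_closed:
        return "emergency_stop"
    if obstacle_ahead:
        return "avoid"
    return "forward" if eeg_focus > 0.7 else "idle"

print(wheelchair_command(0.9, eye_closed=False, obstacle_ahead=False))  # forward
print(wheelchair_command(0.9, eye_closed=True, obstacle_ahead=False))   # emergency_stop
```

Keeping the safety checks ahead of the intent decoding mirrors the design goal stated in the text: sensor overrides must win even when the decoded thought says "go".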

IX. DISADVANTAGES AND PROBLEMS
• By invading the thoughts of one's mind, we breach the privacy of an individual, and the confidential data obtained can be hacked by unauthorized persons, which is even more dangerous.
• If the technology is developed or utilized by miscreants, it can turn out to be risky.
• This technology can extract important, secure, and confidential information from an individual, a state, or even a country.
• There is no way to neutralize this technology, as we cannot stop someone from eavesdropping on our minds.
X. ADVANCEMENTS
Mind-reading technology is not science fiction; a couple of wearables can already track brain activity. As of 2020, for the first time there is a consumer brain-computer interface with which you can, in principle, control a device by focusing your thoughts on it. It targets entertainment and gaming, where the technology can reach the most people. Wearing the headband, users were introduced to a video game with a rocket ship: the harder you focus, the faster the rocket ship moves, increasing your score, while relaxing the mind slows the rocket down. A light on the front of the headband turns red when your brain is intensely focused, yellow when it is in a relaxed state, and blue when it is in a meditative state.
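The feedback loop in that game reduces to mapping one attention score onto two outputs: a light color and a rocket speed. The thresholds and the normalized score below are illustrative assumptions, since the headband's actual scale is not described in the text.

```python
def headband_light(focus_score):
    """Map a normalized focus score in [0, 1] to the headband light colors
    described above. The thresholds here are illustrative."""
    if focus_score >= 0.7:
        return "red"      # intensely focused
    if focus_score >= 0.4:
        return "yellow"   # relaxed
    return "blue"         # meditative

def rocket_speed(focus_score, max_speed=10.0):
    """Harder focus means a faster rocket; relaxing slows it down."""
    return max_speed * focus_score

print(headband_light(0.8), rocket_speed(0.8))  # red 8.0
```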
Trials have moved forward with implantable devices in humans, which can be used for different purposes. Nissan unveiled Brain-to-Vehicle technology that would allow vehicles to interpret signals from the driver's brain. Elon Musk's clinical trial, for example, focuses on patients with complete paralysis due to an upper spinal-cord injury. The implanted mechanism serves another purpose: it will allow the user to control virtually any device, such as a smartphone or an electric vehicle, with their mind, which would be of great importance for patients with physical limitations. Several Australian mining companies have adopted the SmartCap, a device that looks like a baseball cap but is lined with EEG electrodes on the interior rim, to reduce the impact of fatigue on the safety and productivity of their staff. Government-backed surveillance projects deploy brain-reading technology to identify changes in the emotional states of employees on production lines, in the military, and at the helm of high-speed trains.

XI. FINDINGS AND SUGGESTIONS
My finding in mind reading is that one can read a person's mind and tell whether the person is lying. We can identify that a person is lying from changes in the voice, from bodily expressions that do not match what they are saying out loud, or from long pauses before they speak. Some people may also show signs of sweating or trembling. Another method of lie detection is to note the movement of the eyes while speaking: if the subject is right-handed and their eyes move toward the left, the subject is likely to be lying, and if the subject is left-handed and their eyes move toward the right, the subject is likely to be lying. Having studied mind reading, its various techniques, and its applications, I suggest studying in detail how psychology and computer techniques coordinate in mind reading.

XII. CONCLUSION
Mind reading is the ability to infer people's mental states and to use that inference to make sense of and predict their behavior. In this paper, I have discussed the impact mind-reading technology has on the present world. The science behind the technology explains how mind-reading computers work, and the different techniques discussed, from biofeedback and facial affect detection to head pose estimation and brain-computer interfaces, show how the idea can be put into practice.