The Virtual Piano Action: Design and Implementation using Digital Image Processing

DOI : 10.17577/IJERTV9IS050506


Yashwanth G1

UG Student1,

Department of Information Science and Technology, Vidya Vikas Institute of Engineering & Technology, Karnataka, India

Saifulla Khan2

UG Student2,

Department of Information Science and Technology, Vidya Vikas Institute of Engineering & Technology, Karnataka, India

T V Rahul Reddy3

UG Student3,

Department of Information Science and Technology, Vidya Vikas Institute of Engineering & Technology, Karnataka, India

Sukanya H A4

UG Student4,

Department of Information Science and Technology, Vidya Vikas Institute of Engineering & Technology, Karnataka, India

Varsha N5

Assistant Professor5,

Department of Information Science and Technology, Vidya Vikas Institute of Engineering & Technology, Karnataka, India

Abstract— The virtual piano action is a haptic keyboard created by combining motorized keys with a real-time simulation of the piano action mechanism. Using this apparatus, we have re-created certain aspects of the grand piano feel by incorporating a mathematical model of the piano action into a real-time, human-in-the-loop simulation program. In this paper, simulator construction is presented using the simulation of hammer release and catch as the central example. The design of the software modules that oversee the simulation of models with changing kinematic constraints is discussed, including a finite state machine driver that allows the modeling of rigid bodies whose constraint conditions depend on contact events at run time.


    1. INTRODUCTION

    Musical instruments have always been judged not only by how they sound, but also by how they feel to play. Although modern algorithms have made synthetic instruments capable of closely approximating the sounds of their acoustic counterparts, the technology has not adequately addressed the player's concern for touch response. On acoustic instruments, there is a strong connection between the mechanical response of the instrument and its acoustic response. Indeed, sometimes more information is available to the player through the behavior of the mechanism, its feel, than through its sound. For example, to find the fastest possible repetition rate on a piano, the player senses the minimum key return required to reset the jack under the hammer by feel rather than by ear. Re-creating the relationship between mechanical behavior and acoustic behavior in synthetic instruments would greatly increase the amount of information flowing from the instrument over the haptic channel, and thus give the performer greater expressive control. At CCRMA we are designing and building a virtual piano action: a keyboard-action simulator (software) and a haptic display device

    (hardware), which together make it possible to simulate the feel of various keyboard actions. The haptic display device is a motorized key, or other manipulandum, which, under computer control, makes it possible to present virtual physical objects through touch. Certainly, synthesizer controllers are a prime application for haptic display technology. In fact, we believe this technology will one day be an effective means of building the desired touch into commercial keyboard controllers; indeed, a faithful grand piano action feel is its leading market application. The feel of a harpsichord, fortepiano, grand piano, or something altogether new could be made available at the push of a button. Several robotics research labs are developing haptic display technology, in particular labs at MIT, Northwestern, and North Carolina. Claude Cadoz's group, ACROE in Grenoble, also develops haptic display devices for virtual musical instruments [Cadoz 93]. Their modeling and design tools are intended to run on special-purpose, massively parallel machines. By contrast, our tools are based on standard modeling techniques and designed to run on single-processor platforms. Our target domain is the viable real-time simulation of linear and nonlinear systems, systems with and without memory, and systems with changing kinematic constraints. In this paper, we discuss the construction of models and their installation into the simulator, paying special attention to the accommodation of changes in kinematic constraint. Contact problems (e.g., bodies which can make and lose contact with one another) should be counted among the members of this simulator's repertoire. Some of the most interesting haptic cues in our context arise from changes in kinematic constraint, and this is indeed the case with keyboard actions. Examples include the

    moment at which the plectrum is drawn past the harpsichord string, and the change in resistance at let-off in the piano action. In such a physical system, various constraint conditions may hold at any given instant, depending on how the bodies that make up the system (and perhaps the user) interact. Barzel [Barzel 92] and others have treated the simulation of such systems by modeling them with a sequence of ordinary differential equations. In our work, we adopt their approach and couple it with a finite state machine (FSM) driver in the simulator, which allows the sequence of constraint conditions, or 'states', to be other than a fixed order known before simulation time. Creating a suitable model and simulating it with a haptic display so as to reproduce every aspect of the complex behavior of the piano action will necessarily be a step-by-step process. Section 2 describes the design of models appropriate for our simulator. Section 3 presents the model of a bouncing ball and its simulation algorithm. In Section 4, we construct a simplified model comprising the key lever and hammer and discuss its simulation algorithm. The installation of a finite state machine manager to add a virtual keybed to the model is discussed in Section 5. Experimental results are mentioned in Section

    6. Finally, a summary of current and future work is given in Section 7.


    2. MODEL DESIGN

    Of all the different forms in which dynamical system models may be presented, a set of ordinary differential equations (ODEs) (with constraints included) is among the simplest. We have chosen to base our simulator on models expressed as ODEs. In so doing, we expect to be confronted with the more complex issues involved in real-time simulation. The model itself is built as a piecewise continuous ODE. Discontinuities are allowed at those points in time associated with changes in kinematic constraint. The interval between each pair of discontinuities is governed by a single member of a set of 'submodels', each of which is a continuous ODE designed to describe the system in one of its constraint conditions. For a complete introduction to the formalism used to define these piecewise continuous ODEs, see [Barzel 92] or [Gillespie 93]. Here, we briefly summarize. Each submodel is divided into three parts to facilitate the simulator's decision as to which submodel should govern the behavior at a given time. The three parts are: the equations of motion, the readout equation, and the indicator function. The equations of motion are solved numerically in order to maintain the state variables at each time step. The readout equation is an expression for the output (in our case the force response) in terms of the state and input. The indicator function is tested once per servo cycle to indicate whether it is time to switch to the next submodel.
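The three-part submodel structure just described can be sketched in code. The following Python sketch is illustrative only: the type names, the forward-Euler integrator, and the negative-indicator convention are our own choices, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A submodel bundles the three parts named in the text:
#   f         -- equations of motion, x_dot = f(x, u), integrated to maintain the state
#   readout   -- output (the force response) as a function of state and input
#   indicator -- tested once per servo cycle; a negative value signals a switch
@dataclass
class Submodel:
    f: Callable[[List[float], float], List[float]]
    readout: Callable[[List[float], float], float]
    indicator: Callable[[List[float], float], float]

def servo_cycle(sub: Submodel, x: List[float], u: float,
                dt: float) -> Tuple[List[float], float, bool]:
    """Advance one servo cycle with forward Euler (a simple stand-in for the
    discrete-equivalent update or a Runge-Kutta routine)."""
    dx = sub.f(x, u)
    x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    force = sub.readout(x, u)
    switch = sub.indicator(x, u) < 0  # negative means: time for the next submodel
    return x, force, switch
```

For example, a free-fall submodel with state [position, velocity] would use `f = lambda x, u: [x[1], -9.81]` and an indicator that goes negative at contact.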


    3. THE BOUNCING BALL

    Here we describe a very simple system, a ball which bounces on a vertically moving paddle. It is linear and has only two submodels: ball in the air, and ball on the paddle. After introducing this system, we will claim that it is actually a good model of the piano action.

    Figure 1: The Bouncing Ball

    Figure 1 shows the two submodels which make up the bouncing ball: a) the ball is attached to the paddle through a spring, and b) the ball is flying vertically in the air, free from the paddle. For submodel a), the equation of motion, readout equation, and indicator function are, respectively:

    q̈ = -(k/mb)(q - d) - g (1)

    F = mp d̈ + k(q - d) + mp g (2)

    k(q - d) < 0 (3)

    The indicator function (3) evaluates to TRUE when the ball/paddle interaction force is tensile, signalling the end of applicability of model a). For the ball in air submodel, the equation of motion, readout equation, and indicator function are:

    q̈ = -g (4)

    F = mp d̈ + mp g (5)

    q - d < 0 (6)

    The indicator function (6) evaluates to TRUE when there is interference between the ball and paddle.

    A linear differential equation such as we have here is always expressible in state space form as

    x = Ax + bu (7)

    This differential equation can be converted to a difference equation

    x_{n+1} = Φ x_n + Γ u_n (8)

    suitable for simulation on a digital computer. The discrete equivalent matrices Φ and Γ are given in terms of the continuous matrices A and b and the time step T by Φ = e^(AT) and Γ = (∫₀ᵀ e^(Aτ) dτ) b.

    Several common computer algorithms with good numerical properties are available to do the conversion. The simulation algorithm then simply involves a matrix multiply to advance the simulation by one time step T.

    Because these equations are second order linear ODEs, analytical solutions exist. There is in fact no need to solve the differential equations numerically for this simple model. The state can be expressed as a function of the input and time. The force output is computed as a function of the motion input using the readout equation of the applicable submodel until such time that the indicator function evaluates to a

    negative number. The simulator then exchanges submodels, using the final conditions of the last as the initial conditions of the next submodel. Note that in this case, the 'next' submodel is just the other submodel. This rather simplistic model creates a very convincing virtual bouncing ball when implemented with the haptic display device. Interaction between the ball and user through a motorized key (in this case to be viewed as a paddle handle) includes all the properly timed power exchanges to suggest manipulation of a bouncing object. In summary, we have implemented a unilateral constraint (a gross non-linearity: a contact capable of supporting compressive but not tensile forces) by combining two linear submodels with some management routines for exchanging them in and out of the simulator.
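A minimal sketch of this two-submodel scheme, with a stationary paddle at d = 0, is given below. The parameters and sign conventions are our own choices (made so that the contact spring pushes the ball back upward and a resting ball stays in contact); the sketch illustrates the submodel-swapping idea, not the paper's exact implementation.

```python
# Two-submodel bouncing ball: free flight, and ball coupled to the paddle
# through a contact spring. Final conditions of one submodel carry over as
# initial conditions of the other when an indicator fires.
G = 9.81

def sim_bounce(q0=0.5, mb=0.1, k=4000.0, t_end=2.0, dt=1e-4):
    q, v = q0, 0.0          # ball height and velocity (paddle fixed at d = 0)
    in_air = True
    n_contacts = 0
    t = 0.0
    while t < t_end:
        if in_air:
            a = -G                      # submodel b): ball flying free
            if q < 0.0:                 # indicator: interference with the paddle
                in_air = False
                n_contacts += 1         # state carries over to the next submodel
        else:
            a = -(k / mb) * q - G       # submodel a): ball attached through spring
            if q > 0.0:                 # indicator: contact force would turn tensile
                in_air = True
        v += a * dt                     # semi-implicit Euler step
        q += v * dt
        t += dt
    return q, n_contacts

q_final, n_contacts = sim_bounce()
```

In a haptic implementation, the readout equations (2) and (5) would be evaluated each cycle to command the motor force, with the measured key position standing in for the paddle input.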


    4. A SIMPLIFIED PIANO ACTION MODEL

    Figure 2 shows a simplified schematic diagram of the piano action. This model has only two bodies, the key and the hammer. The letoff function of the whippen and jack is not modeled. The hammer and key are coupled with a unilateral constraint. A spring accounts for compliance in the action, most of which is due to softness of the hammer knuckle. The other submodel, in which the hammer flies free of the key, is not shown; it can be surmised. This model will behave like a piano action in which the regulation button is set too high, inactivating the letoff and repetition functions.


    Figure 2: Simplified Piano Action

    The non-linear equations of motion, readout equations and indicator functions are not presented here. The simulation is realized in this case with an ODE solver instead of the difference equation (8). A Runge-Kutta or other numerical ODE solution routine is responsible for advancing the state by each time step, using the previous state and the input (key motion). As before, the force output is given by the readout equation, and the indicator function is tested each time step.

    Because the angles through which both the key and the hammer move are rather small, several linearizing assumptions can be made in the construction of a piano action model. Specifically, we shall assume that all interaction forces and gravity forces act perpendicular to the bodies to which they are applied, as seen in Figure 2. Also, the force of the key on the hammer is applied at a fixed position on the hammer determined by l3. Given these assumptions, the equations of motion, readout functions, and indicator functions respectively are as follows:

    q̈ = -(k l3/Ih)(l5 s + l3 q) - mh l4 g/Ih (11)

    F = (Ik/l1) s̈ + (k l5/l1)(l5 s + l3 q) + mk l2 g/l1 (12)

    k(l5 s + l3 q) < 0 (13)

    See [Topper 87] for an explicit derivation of the equations of motion for this model. Note that the function of the action is very much like that of the ball and paddle in the model outlined above: to throw the hammer toward the string and then catch it again. The simple addition of a ceiling for the ball to bounce off (a virtual string), and an inversion of the paddle's motion to reflect the fact that the hammer is actuated from the opposite side of the key fulcrum, will turn the bouncing ball model into a good first approximation of the piano action. After further assuming that inertia forces dominate over gravity forces in the coupled hammer-key model, appropriate mass and spring values for an approximating ball and paddle model can be deduced by comparing equations (1), (2), (3) with (11), (12), (13).

    5. THE VIRTUAL KEYBED AND FINITE STATE MACHINE

    A useful addition to our model is a virtual keybed. The key dip differs between a harpsichord, fortepiano and piano, and this is an aspect we would like to include in our keyboard simulator. The method outlined so far only accommodates models in which the sequence of submodels is known ahead of time: going back and forth between two submodels. Depending on the manner in which the key is depressed, either the hammer could fly free or the key could meet the keybed first. The other change in condition may not follow, again depending on how the key was depressed. In order to manage the sequencing through the various submodels, we employ a finite state machine simulator. A finite state machine is a dynamical system capable of taking on a finite number of states in a possibly complex sequence of transitions from a particular state to certain others of the set of possible states. A finite state model is fully specified by its state transition graph, one of which is shown in Figure 3. This finite state model is for the simplified piano action with a virtual keybed. Coupling between bodies is noted in Figure 3 by spring icons. Only certain transitions are allowed. Associated with each transition path is an indicator function which, upon evaluation to a number less than zero, indicates that it is time to transition to the model pointed to by that path.

    Figure 3: State Transition Graph for the Piano Action
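The finite state machine manager might be sketched as a transition table in which each state lists its allowed outgoing paths as (indicator, next state) pairs. The state names and the gap/force signals below are our own shorthand for the states of Figure 3, not the authors' code.

```python
# Illustrative FSM driver for submodel sequencing. Each state lists its
# allowed transition paths as (indicator, next_state) pairs; an indicator
# evaluating to a negative number triggers the transition. Signal names
# (contact_force, hammer_gap, keybed_gap) are hypothetical.
TRANSITIONS = {
    "hammer_on_key": [
        (lambda s: s["contact_force"], "hammer_free"),            # force turns tensile
        (lambda s: s["keybed_gap"], "hammer_on_key_on_keybed"),   # key reaches keybed
    ],
    "hammer_free": [
        (lambda s: s["hammer_gap"], "hammer_on_key"),             # hammer lands on key
        (lambda s: s["keybed_gap"], "hammer_free_key_on_keybed"),
    ],
    "hammer_on_key_on_keybed": [
        (lambda s: s["contact_force"], "hammer_free_key_on_keybed"),
    ],
    "hammer_free_key_on_keybed": [
        (lambda s: s["hammer_gap"], "hammer_on_key_on_keybed"),
    ],
}

def step_fsm(state, signals):
    """Test each allowed outgoing indicator once per servo cycle and
    transition along the first path whose indicator is negative."""
    for indicator, next_state in TRANSITIONS[state]:
        if indicator(signals) < 0:
            return next_state
    return state
```

Because only the listed paths are checked, impossible transitions (for instance, the hammer leaving and re-landing on the key in a single cycle) are ruled out by construction.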


    6. EXPERIMENTAL RESULTS

    We have conducted several introductory experiments using this apparatus. Subjects have made side-by-side key-press comparisons between the virtual action described above and a physical action with its regulation button removed, with promising results. We have begun to address real-time simulation problems, such as extra energy introduced into the simulation by model-transition timing errors, with compensating additions to the simulation algorithm.


7. CONCLUSION

We have presented a modeling and simulation algorithm which accommodates dynamical systems with changing kinematic constraints and provides for the re-creation of their mechanical impedance by simulation and haptic display. The method involves modeling the system in each of its constraint conditions. Readout equations expressing the force output in terms of the state variables, as well as indicator functions which signal the end of applicability, accompany each model. The sequence of models can be considered a piecewise continuous ODE. If the model is linear, it can be discretized and then simulated with a difference equation. Otherwise, an ODE solver is used. The method is also useful for systems in which the sequence of constraint conditions is not known ahead of time, with the addition of a submodel manager based on a finite state machine driver.

A model of a bouncing ball and a simplified piano action were presented.


REFERENCES

  1. [Cadoz 93] C. Cadoz, A. Luciani, J.-L. Florens. CORDIS-ANIMA: A modeling and simulation system for sound and image synthesis — the general formalism. Computer Music Journal, Vol. 17, No. 1, Spring 1993, pp. 19-29.

  2. [Topper 87] T. Topper, B. Wills. The computer simulation of piano mechanisms. International Journal of Modelling and Simulation, Vol. 7, No. 4, 1987.

  3. A. Aristidou, J. Lasenby. Motion capture with constrained inverse kinematics for real-time hand tracking. In International Symposium on Communications, Control and Signal Processing, IEEE, 2010, pp. 1-5.

  4. [Barzel 92] R. Barzel. Physically-Based Modeling for Computer Graphics. Academic Press, Boston, 1992.

  5. [Gillespie 93] B. Gillespie, M. Cutkosky. Interactive dynamics with haptic display. In Proceedings of the 1993 ASME Winter Annual Meeting, New Orleans, pp. 65-72.

  6. A. Broersen, A. Nijholt. Developing a virtual piano playing environment. In IEEE International Conference on Advanced Learning Technologies (ICALT 2002), 2002, pp. 278-282.

  7. L. W. Campbell, A. E. Bobick. Recognition of human body motion using phase space constraints. In Fifth International Conference on Computer Vision, IEEE, 1995, pp. 624-630.

  8. J. Chow, H. Feng, R. Amor, B. C. Wünsche. Music education using augmented reality with a head mounted display. In Proceedings of the Fourteenth Australasian User Interface Conference, Vol. 139, Australian Computer Society, Inc., 2013, pp. 73-79.

  9. D. Comaniciu, P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Trans. PAMI, Vol. 24, No. 5, 2002, pp. 603-619.

  10. S. Dirkse. A survey of the development of sight-reading skills in instructional piano methods for average-age beginners and a sample primer-level sight-reading curriculum. University of South Carolina, 2009.

  11. M. A. Fischler, R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, Vol. 24, No. 6, 1981, pp. 381-395.

  12. N. Gillian, J. A. Paradiso. Digito: A fine-grain gesturally controlled virtual musical instrument. In Proc. NIME, 2012.

  13. R. Girshick, J. Shotton, P. Kohli, A. Criminisi, A. Fitzgibbon. Efficient regression of general-activity human poses from depth images. In IEEE International Conference on Computer Vision, 2011.

  14. T. J. Mitchell, S. Madgwick, I. Heap. Musical interaction with hand posture and orientation: A toolbox of gestural control mechanisms. 2012.

  15. P. Modler, T. Myatt. Video based recognition of hand gestures by neural networks for the control of sound and music. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), Citeseer, 2008.

  16. K. Nymoen, S. A. v. D. Skogstad, A. R. Jensenius. SoundSaber — a motion capture instrument. 2011.

  17. I. Oikonomidis, N. Kyriazis, A. Argyros, et al. Full DOF tracking of a hand interacting with an object by modeling occlusions and physical constraints. In IEEE International Conference on Computer Vision, IEEE, 2011, pp. 2088-2095.

  18. G. Palshikar, et al. Simple algorithms for peak detection in time-series. In Proc. 1st Int. Conf. Advanced Data Analysis, Business Analytics and Intelligence, 2009.

  19. Z. Ren, R. Mehra, J. Coposky, M. Lin. Designing virtual instruments with touch-enabled interface. In CHI '12 Extended Abstracts on Human Factors in Computing Systems, ACM, 2012, pp. 433-436.

  20. K. Rogers, A. Röhlig, M. Weing, J. Gugenheimer, B. Könings, M. Klepsch, F. Schaub, E. Rukzio, T. Seufert, M. Weber. P.I.A.N.O.: Faster piano learning with interactive projection. In Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, ACM, 2014, pp. 149-158.

  21. J. H. Hammer, J. Beyerer. Robust hand tracking in real-time using a single head-mounted RGB camera. In International Conference on Human-Computer Interaction, Springer, 2013, pp. 252-261.

  22. J. Han, N. Gold. Lessons learned in exploring the Leap Motion sensor for gesture-based instrument design. In Proceedings of the International Conference on New Interfaces for Musical Expression, 2014, pp. 371-374.

  23. L. Han, X. Wu, W. Liang, G. Hou, Y. Jia. Discriminative human action recognition in the learned hierarchical manifold space. Image and Vision Computing, Vol. 28, No. 5, 2010, pp. 836-849.

  24. M. Heavers. Vimeo video: Leap motion air piano. https://vimeo.com/67143314.

  25. C. Li, K. M. Kitani. Pixel-level hand detection in egocentric videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2013, pp. 3570-3577.

  26. H. Liang, J. Yuan, D. Thalmann. Resolving ambiguous hand pose predictions by exploiting part correlations. IEEE Trans. Circuits and Systems for Video Technology, Vol. 25, No. 7, 2015, pp. 1125-1139.

  27. C.-C. Lin, D. S.-M. Liu. An intelligent virtual piano tutor. In Proceedings of the 2006 ACM International Conference on Virtual Reality Continuum and Its Applications, ACM, 2006, pp. 353-356.

  28. F. Lv, R. Nevatia. Recognition and segmentation of 3-D human action using HMM and multi-class AdaBoost. In Computer Vision — ECCV 2006, Springer, pp. 359-372.

  29. S. Melax, L. Keselman, S. Orsten. Dynamics based 3D skeletal hand tracking. In Proceedings of Graphics Interface 2013, pp. 63-70.

  30. M. M., D. Ramanan. 3D hand pose detection in egocentric RGB-D images. In ECCV Workshop on Consumer Depth Cameras for Computer Vision, Springer, 2014, pp. 356-371.

  31. X. Sun, Y. Wei, S. Liang, X. Tang, J. Sun. Cascaded hand pose regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 824-832.

  32. A. Tagliasacchi, M. Schröder, A. Tkach, S. Bouaziz, M. Botsch, M. Pauly. Robust articulated-ICP for real-time hand tracking. Computer Graphics Forum, Vol. 34, No. 5, 2015, pp. 101-114.

  33. D. Tang, T.-H. Yu, T.-K. Kim. Real-time articulated hand pose estimation using semi-supervised transductive regression forests. In Proceedings of the IEEE International Conference on Computer Vision, IEEE, 2013, pp. 3224-3231.

  34. C. Xu, L. Cheng. Efficient hand pose estimation from a single depth image. In IEEE International Conference on Computer Vision, IEEE, 2013, pp. 3456-3462.

  35. C.-H. Yeh, W.-Y. Tseng, J.-C. Bai, R.-N. Yeh, S.-C. Wang, P.-Y. Sung. Virtual piano design via single-view video based on multi-finger actions recognition. In 2010 3rd International Conference on Human-Centric Computing, pp. 1-5.

  36. X. Yi, C. Yu, M. Zhang, S. Gao, K. Sun, Y. Shi. ATK: Enabling ten-finger freehand typing in air based on 3D hand tracking data. In Annual ACM Symposium on User Interface Software and Technology, 2015.

  37. L. L. Zhu, Y. Chen, Y. Lu, C. Lin, A. Yuille. Max margin AND/OR graph learning for parsing the human body. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), IEEE, pp. 1-8.
