DOI : 10.17577/IJERTV14IS100008
- Open Access
- Authors : Kwang B. Lee, Jamie W. Lee
- Paper ID : IJERTV14IS100008
- Volume & Issue : Volume 14, Issue 10 (October 2025)
- Published (First Online): 09-10-2025
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Developing Virtual and Augmented Reality UI Toolkit using Model-Based Parameter Analysis
Kwang B. Lee
Computing and Security, Slippery Rock University of Pennsylvania
Slippery Rock, Pennsylvania, USA
Jamie W. Lee
US Navy, USA
Abstract: The growing demand for virtual reality (VR) and augmented reality (AR) apps demonstrates how rapidly this technology is maturing. VR and AR apps provide immersive, interactive experiences that can revolutionize a wide range of fields, including education, entertainment, healthcare, and design. To design and develop VR apps and interfaces, it is important to analyze the controllable and uncontrollable parameters of the VR system. This analysis helps developers identify areas for design and interaction improvement and provides more intuitive and user-centered experiences. This paper systematically analyzes design parameters that affect user performance and usability when interacting with UI layouts, using design engineering techniques. While previous studies have evaluated specific types of human performance models, this study explores the parameters required to build model-based UI layouts, such as visual aesthetics and physical ergonomics. By identifying these key parameters, we hope to lay the foundation for further research to improve the quality of immersive experiences for users with diverse abilities.
Keywords: virtual reality, augmented reality, user interfaces, human-computer interaction
I. INTRODUCTION
VR technology is widely applied in fields such as medicine, the military, aviation, and retail. A main reason for using VR is that it offers a risk-free learning environment, and as VR usage grows, research on its impact on learning outcomes and experiences is steadily increasing. The past several decades have brought improvements in virtual and augmented reality (VR, AR) systems, providing users with an immersive and interactive digital experience. While these systems have been integrated into devices such as mobile devices, projectors, and PCs, their integration into head-mounted displays (HMDs) has facilitated a wider range of movement and a more comfortable user experience. These devices can provide a completely immersive simulated experience, as with VR, or a projection of computer-generated objects onto the real world, as with AR. The multisensory immersion offered by both enables users to bridge the divide between the physical and virtual worlds and have experiences far removed from everyday reality. For example, the HoloLens 2, shown in Fig. 1, is an optical see-through HMD that provides an AR experience by projecting holograms onto the user's real-world environment. Fig. 2 (a) displays an image, taken with the camera attached to the HoloLens 2, of a sample UI consisting of various widgets and buttons with which the user can physically interact. The HoloLens 2 also utilizes spatial mapping, which provides a detailed representation of real-world surfaces in the environment around the device, as shown in Fig. 2 (b).
Due to the importance of providing a comfortable and efficient VR/AR experience, there has been increased focus on the study of physical and visual ergonomics for designing VR/AR games and applications. For Human-Computer Interaction (HCI) research, there are two major challenges associated with these systems. First, designers cannot be aware of the capabilities of every user of the application they are designing. This limitation is conducive to design exclusion, in which the designer unintentionally excludes certain users or user groups due to implicit biases or assumptions they form about the user. Design exclusion, coupled with limited research on the usability of VR/AR systems for individuals of various (dis)abilities, is detrimental to the goal of providing an enriching and inclusive experience for all users.
Fig. 1. The HoloLens 2 optical see-through HMD
Fig. 2. (a) A sample UI designed for the HoloLens 2 using Unity, (b) the spatial mapping feature of the device.
A second challenge for VR and AR systems is the design of effective 3D user interface (UI) layouts. The optimality of these layouts varies with human perception, psychology, and preference; a design that is optimal for one user will usually not be optimal for all other users. Such noisy behavior and variability between user preferences make the process of UI optimization non-trivial.
To mitigate designer bias and facilitate the design, creation, and exploration of inclusively immersive user interfaces (UIs), this research aims to systematically investigate the controllable and uncontrollable design parameters that dominate user performance and comfort when interacting with 3D UI layouts via a model-based approach. This investigation will provide a framework for the development of a UI design toolkit that takes a model-based approach to parameterize the various perceptual, cognitive, and physical factors that affect user performance and comfort when interacting with 3D UI layouts.
II. RELATED WORKS
VR application design differs significantly from traditional application design due to the immersive and interactive nature of the medium. VR users interact within a three-dimensional space using natural gestures, head movements, controllers, or even eye tracking. The entire virtual environment becomes the interface, and designers need to consider how users will navigate and interact with objects within this 360-degree space. Further, the primary goal of VR is to create a strong sense of immersion and presence, making users feel truly present within the virtual environment. This requires realistic graphics, spatial audio, and haptic feedback to create a convincing and emotionally engaging experience. In addition, designing for user comfort is paramount to prevent motion sickness and disorientation: smooth transitions, minimal jarring movement, and comfortable hardware are crucial. Finally, VR applications demand high performance to maintain smooth frame rates and minimize latency, as any lag can break immersion and cause discomfort. This requires efficient use of resources and optimization techniques specific to VR development. For these reasons, finding design parameters for virtual reality is a fundamental process in developing new VR applications. Well-defined parameters such as realistic graphics, spatial audio, and haptic feedback can increase the sense of immersion and presence, making the virtual world feel more real. In essence, defining design parameters in VR is about creating a successful and impactful experience by balancing technological possibilities with user needs and comfort.
Model-based approaches have been widely used to optimize UIs and improve designs towards specific objectives. Unlike heuristic methods, this approach uses design knowledge in the form of user simulations, models, and/or heuristics as an objective function to model how users interact with and perceive such layouts. Todi et al. [2] adapted this method to develop Sketchplore, an interactive layout sketching tool with a real-time layout optimizer to generate usable and aesthetic layouts for conventional 2D interfaces. Their design tool uses predictive models to address the aesthetic and sensorimotor performance measures of generated layouts, such as visual clutter and search, grid quality, color harmony, and target acquisition, to define a multi-objective function. Multi-threaded optimization is then used to explore and exploit the design space.
An alternative approach was taken by Mott et al. [3] in the form of semi-structured interviews with individuals with limited mobility, which revealed that the abilities of many of their participants did not match the assumptions embedded in current VR design. The researchers found that many participants struggled with one or more of seven VR accessibility barriers, including manipulating dual motion controllers, putting on and taking off VR HMDs, and setting up the VR system. These barriers often deter users from engaging with such devices and emphasize the need for designing VR/AR systems that are accessible to all people. Blandford [4] addresses the principles for designing, conducting, and reporting on such qualitative studies for the purpose of understanding current needs and practices and evaluating the effects of new technologies in practice. We see these principles reflected in many fields across HCI; for example, Dias et al. [5] interview patients with Parkinson's disease (PD), physicians, and software/game developers to identify the most significant game-design factors in designing assistive HCI serious games for PD patients. Similarly, SUPPLE [6] is an ability-based system that generates different renditions in response to different user usage patterns. The system automatically constructs UIs using an optimization process that searches the design space for an interface that minimizes the user's movement time. The model for movement time is created by prompting the user to complete a series of clicking, pointing, dragging, and list selection tasks. Through this approach, SUPPLE generates UIs customized to a user's abilities, which enables more efficient and accessible mouse interactions.
Given the difficulty of extracting data in situ from actual users or generating realistic data from proxy users, we instead integrate techniques adapted from design engineering, a methodology used to design products and systems that is often useful for systems which are complex and costly to validate. Kristensson et al. [7] conduct envelope analysis and study theoretical performance envelopes of a context-aware sentence retrieval system. By extracting parameters from the functional description of the system and simulating its potential performance, they are able to identify potential keystroke savings as a function of the parameters of the subsystems, revealing additional insight for designing augmentative and alternative communication technologies. After parameterizing a model, designers often aim to find the optimal settings of their controllable parameters to maximize efficiency in terms of their design objectives. However, layout optimization is a complex task, especially when it encompasses both usability and aesthetic qualities.
III. BACKGROUND FOR PARAMETER ANALYSIS
Virtual reality (VR) experiences are shaped by a complex interplay of factors, some of which developers can control, while others are inherent to the technology or to user psychology. VR parameters encompass a range of factors that influence the user experience and can be analyzed to optimize VR systems and applications, whether for entertainment, training, or other purposes. These parameters can be broadly categorized into technical specifications, user experience factors, and physiological responses; analyzing them helps in understanding the impact of VR on users, improving system design, and developing effective VR applications. Generally, the following parameters should be considered when designing a new VR application.
- Environmental factors: users interact with a 3D space, which includes light, shadow, visuals, audio, scale, scope, and weather conditions.
- Hardware constraints: the inherent constraints of VR hardware, including display quality, processing power, battery life, and tracking accuracy.
- User interface (UI) and user experience (UX) design: VR creates a strong sense of immersion and presence, which requires clarity, intuitiveness, voice commands, and well-designed information presentation interfaces.
- User movement and action: the VR controller requires setting options, hand tracking, distance, and a boundary system that recognizes user movements and actions.
- Psychological factors: the VR system requires adjusting the experience based on adaptation, personalization, and cognitive management in VR environments.
Smooth transitions are essential to maintaining immersion in virtual reality. Abrupt transitions between scenes can be confusing or uncomfortable, and designing smooth transitions helps keep users engaged; for example, maintaining similar lighting or color schemes can help users adapt more quickly. Testing transitions with real users can provide valuable feedback for improvement. In the next section, we review which of the above parameters can be utilized now when creating actual UIs or applications using the model-based method, within two major categories: uncontrollable parameters and controllable parameters.
IV. CONSIDERING PARAMETERS FOR VR USER INTERFACES
This part of the paper identifies the relevant controllable and uncontrollable parameters that dominate user performance and comfort when interacting with 3D user interfaces. Due to the difficulty of extracting data in situ from actual users or generating realistic data from proxy users, we take a model-based approach. This approach involves two steps: (1) identification and examination of pertinent models of human performance, and (2) determination of the optimal settings of controllable parameters using these models. A model-based approach offers potential for cost- and time-effective evaluation of user performance without the need for intrusive measures. This approach has been used previously in UI development; for example, SPRWeb [8] is a tool that recolors websites to preserve subjective responses and improves color differentiability to enable users with color vision deficiency (CVD) to have similar online experiences as non-CVD users. Flatla et al. [8] use models of subjective responses from external studies and develop a constraint optimization technique that seeks to minimize a cost function computed as a weighted sum of four individual costs: perceptual naturalness, perceptual differentiability, subjective-response naturalness, and subjective-response differentiability. Their evaluation demonstrated that SPRWeb outperformed the state-of-the-art Kuhn recolorer in choosing replacement colors for recoloring websites. Sketchplorer [9] is another example of a model-based approach; the sketching tool uses a real-time layout optimiser with predictive models of sensorimotor performance and perception to steer the designer toward more usable and aesthetic layout designs.
A. Uncontrollable Parameters
We first identify and describe the relevant parameters which cannot be directly set by the designer, also known as uncontrollable parameters. These parameters are relevant in the UI optimization process and can be used to construct and mitigate the occurrence of a potential "worst-case" scenario. We adapt quantitative models of these uncontrollable parameters from various studies to analyze the effect of each parameter on user performance and comfort when interacting with UIs designed for AR systems.
Physical Ergonomics: User comfort and ergonomics are important considerations in HCI for improving the user experience when interacting with VR/AR systems. Despite ongoing research, there are still challenges with evaluating VR/AR ergonomics; current methods often involve interviews and/or questionnaires, such as those used by Mott et al. [3] to evaluate the accessibility of VR systems for persons of limited mobility. In most scenarios, designers will not be able to obtain feedback from enough individuals to adequately represent all potential target users. Thus, researchers have attempted to find methods of quantitatively modelling and predicting user ergonomics. In the following subsections, we describe three methods of quantitatively analyzing physical ergonomics: consumed endurance, biomechanical simulation, and Rapid Upper Limb Assessment (RULA).
Consumed Endurance: VR/AR devices commonly use arm and hand gestures to enable communication between the user and system. However, prolonged use of the arms and upper body for mid-air gestures often leads to upper arm fatigue, a phenomenon commonly known as the gorilla-arm effect. Hincapié-Ramos et al. [10] develop a metric to quantify the severity of this effect, Consumed Endurance (CE), which is derived from the biomechanical structure of the upper arm. Although multiple body parts are involved in such mid-air arm interactions, Hincapié-Ramos et al. focus on the shoulder joint since it largely dominates the forces required for moving the arm. This perspective of CE therefore considers endurance of the shoulder in terms of torque as a ratio to the interaction time and uses shoulder torque as an index of muscle strain. To further simplify CE computations, we assume that all arm poses are static: for a static pose, the shoulder torque must balance the gravity torque, and the arm's angular acceleration is zero.
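Under the static-pose assumption, the CE computation reduces to a gravity-torque balance at the shoulder plus an endurance curve for sustained exertion. The sketch below illustrates the idea in Python; the arm mass, centre-of-mass distance, maximum torque, and the Rohmert-style endurance constants are illustrative values based on our reading of [10], not the authors' implementation.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def shoulder_torque_static(arm_mass_kg, com_dist_m, elevation_rad):
    """Gravity torque at the shoulder for a static arm pose.

    `elevation_rad` is measured from the arm hanging straight down,
    so a horizontal arm (pi/2) gives the maximum lever arm. For a
    static pose the shoulder torque must balance this gravity torque,
    since angular acceleration is zero."""
    lever_m = com_dist_m * math.sin(elevation_rad)
    return arm_mass_kg * G * lever_m

def consumed_endurance(torque_nm, max_torque_nm, interaction_time_s):
    """CE = interaction time / endurance time * 100, with endurance
    time taken from a Rohmert-style curve for sustained exertion
    (treat the constants as approximate)."""
    pct_mvc = torque_nm / max_torque_nm * 100.0
    if pct_mvc <= 15.0:
        return 0.0  # below ~15% of max torque, the pose is sustainable
    endurance_s = 1236.5 / ((pct_mvc - 15.0) ** 0.618) - 72.5
    return interaction_time_s / endurance_s * 100.0

# Example: a 3.5 kg arm with its centre of mass 0.29 m from the
# shoulder, held horizontally for a 60-second interaction.
torque = shoulder_torque_static(3.5, 0.29, math.radians(90))
ce = consumed_endurance(torque, max_torque_nm=40.0, interaction_time_s=60.0)
```

A pose held with the arm hanging down produces zero gravity torque and hence zero CE, which matches the intuition that relaxed poses can be sustained indefinitely.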
Biomechanical Simulation: The prediction of posture, location, direction, degree, and other factors of human movement often involves intrusive and/or tedious procedures. Fortunately, biomechanical simulation offers a means of capturing this information and enables cost-efficient estimation of physical ergonomics. It has potential for indicating user fatigue and ergonomics in a non-intrusive manner, which is useful for HCI applications and VR/AR technology. The collection of optical motion tracking data for biomechanical simulation usually involves mapping physical to virtual markers, scaling the musculoskeletal model, adjusting markers through inverse kinematics, and estimating the muscle activations [11]. We adopt the method implemented by Belo et al. [12] for estimating muscle activations from biomechanical simulations. This method uses simulations from OpenSim 4.1 [13], an open-source tool for biomechanical modelling and simulation, as well as the upper extremity model created by Saul et al. [14]. Belo et al. [12] analyze each arm pose over time and then save the timeframe which minimizes the reserve actuation for each pose, yielding an activation value for each muscle and reserve actuator in the model. These values are then combined into a single cost function to describe the cost of each arm pose in terms of muscle activation.
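The combination step can be sketched as a small cost function over the per-muscle activations. The quadratic muscle term and the reserve weighting below are our own assumptions for illustration, not the exact formulation of Belo et al. [12]:

```python
def pose_cost(activations, reserve_actuations, reserve_weight=10.0):
    """Combine per-muscle activations (each in [0, 1]) and reserve
    actuations into one ergonomic cost for an arm pose. Squaring
    penalises a few highly strained muscles more than many lightly
    loaded ones; the reserve term penalises poses the muscle model
    alone cannot produce. Weighting choices are illustrative."""
    muscle_term = sum(a * a for a in activations) / len(activations)
    reserve_term = reserve_weight * sum(abs(r) for r in reserve_actuations)
    return muscle_term + reserve_term

# A relaxed pose should cost less than an overhead reach:
relaxed = pose_cost([0.05, 0.08, 0.02], [0.0])
raised = pose_cost([0.45, 0.60, 0.30], [0.1])
```

Ranking candidate UI placements by such a cost lets a toolkit prefer element positions that keep arm strain low.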
Rapid Upper Limb Assessment: Rapid Upper Limb Assessment (RULA) is a heuristic survey method developed by McAtamney et al. [15] to provide a quick assessment of the postures of the neck, trunk, and upper limbs, along with muscle function and external loads experienced by the body. To allow easy identification of posture ranges, the range of movement for each body part is divided into numbered sections; low posture scores reflect postures with minimal risk factors, while higher scores represent more extreme postures and an increased presence of risk factors. We use scores from the study for the upper arm, lower arm, and wrist, which are based on the joint angles of the upper and lower arms.
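As an illustration, the upper- and lower-arm posture scores can be expressed directly as banded functions of joint angle. The bands below follow the published RULA worksheet as we read it; the adjustment modifiers (raised shoulder, abducted or supported arm) are omitted from this sketch.

```python
def rula_upper_arm(shoulder_flexion_deg):
    """Upper-arm posture score (positive angle = flexion, negative =
    extension). Modifiers for raised shoulder, abduction, and arm
    support are omitted in this simplified sketch."""
    if -20 <= shoulder_flexion_deg <= 20:
        return 1  # near-neutral posture, minimal risk
    if shoulder_flexion_deg < -20 or shoulder_flexion_deg <= 45:
        return 2  # marked extension, or 20-45 degrees of flexion
    if shoulder_flexion_deg <= 90:
        return 3  # 45-90 degrees of flexion
    return 4      # arm raised above shoulder height

def rula_lower_arm(elbow_flexion_deg):
    """Lower-arm posture score: 60-100 degrees of elbow flexion is the
    low-risk band; anything outside it scores higher."""
    return 1 if 60 <= elbow_flexion_deg <= 100 else 2
```

A neutral interaction posture (10° shoulder flexion, 90° elbow flexion) thus scores the minimum for both segments, while an overhead reach scores the upper-arm maximum of 4.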
Cognitive Load: A key feature of AR devices is the ability to project computer-generated visuals onto the user's real environment. Because the user is still situated in their current environment, more consideration must be given to the contextual details of that environment than in the fully immersive experience offered by VR technology. The user's context may include environmental conditions (e.g., indoors vs. outdoors), task, and cognitive load. For example, experimenting in a laboratory with equipment and other researchers would demand a higher cognitive load than sitting alone in an office; in the first environment, the user may desire an interface with fewer visual details than in the second. Cognitive Load Theory [16], which involves estimating the user's workload, is an important aspect of HCI and the development of interactive systems. Generally, designers want to limit distractions and avoid overloading users with information. However, it is often difficult for designers to be aware of the cognitive abilities of each user, let alone develop interfaces which can adapt to changing cognitive levels. Current research has explored various methods of inferring the user's cognitive load in relation to HCI applications; these methods generally fall into one of three predominant categories. The first involves subjective measures such as the NASA TLX [17], a commonly used questionnaire that assesses subjective mental workload on a multi-dimensional rating scale. These subjective measures may be time-consuming and tedious, however, and users may forget details of the tasks they are questioned about. The second category includes physiological measures such as heart rate variability, electromyography, and skin conductance; a key challenge with such measures is that they are invasive and rely on physical contact with the user.
The final category, eye tracking, offers the best potential for non-invasive estimation of cognitive load. Gaze tracking and pupil dilation have previously been researched and suggested to be related to the mental difficulty of tasks. This idea is often traced back to the study by Hess and Polt [18] demonstrating a correlation between pupil size and mental activity in the form of simple multiplication problems. Lindlbauer et al. [19] adopt this method, computing the frequency of changes in pupil diameter to estimate the cognitive load of the user when interacting with a UI generated with an HTC Vive Pro VR headset. This estimate of cognitive load is used to optimize the UI in terms of the amount of information provided (the level of detail, or LOD). While improvements in accuracy and the lowered cost of eye trackers have increased their popularity, eye tracking methods may still suffer from practical limitations and errors caused by off-axis distortion [20] and ambient light [21].
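A minimal proxy for the "frequency of changes in pupil diameter" signal can be written as a change-counting pass over a diameter trace. This is a crude simplification of the pipeline in [19]: the threshold, the raw sample-to-sample difference, and the absence of blink filtering are all our assumptions.

```python
def pupil_change_rate(diameters_mm, sample_rate_hz, threshold_mm=0.1):
    """Count how often the pupil diameter changes by more than
    `threshold_mm` between consecutive samples, per second of data.
    Higher rates are taken as a rough indicator of higher cognitive
    load; a real pipeline would first remove blinks and sensor noise."""
    changes = sum(
        1 for a, b in zip(diameters_mm, diameters_mm[1:])
        if abs(b - a) > threshold_mm
    )
    return changes / (len(diameters_mm) / sample_rate_hz)

# Synthetic 2-second traces at 60 Hz: a steady pupil vs. a busy one.
steady = [3.0 + 0.01 * (i % 2) for i in range(120)]
busy = [3.0 + 0.30 * (i % 2) for i in range(120)]
```

A toolkit could map this rate onto the LOD slider: higher measured rates would trigger sparser, lower-detail interface variants.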
B. Controllable Parameters
We now identify and describe relevant parameters that can be directly set by the designer, also known as controllable parameters. These parameters enable optimization towards design objectives, or specifically, the construction of a UI layout that is adapted to the physical, perceptual, and cognitive abilities of the user.
Target Acquisition: The modelling of human movement is a major component of predicting human-computer interaction and ergonomics. Fitts' Law, which enables predictive modelling of human movement, is arguably the most commonly used human performance model in HCI. In his 1954 paper, Paul Morris Fitts [22] proposed a metric to quantify the difficulty of a target selection task. The metric was based on information theory, in which the difficulty of a task can be measured in bits and, in carrying out a movement task, information is transmitted through a human channel [23].
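In the commonly used Shannon formulation, the index of difficulty of acquiring a target of width W at distance D is ID = log2(D/W + 1) bits, and the predicted movement time is MT = a + b * ID. A small sketch, where the device-specific constants a and b are placeholder values that would normally be fit to user data:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time MT = a + b * ID. The intercept a and
    slope b (seconds, seconds/bit) are illustrative placeholders; in
    practice they are regressed from pointing data for a given input
    device and user."""
    return a + b * index_of_difficulty(distance, width)
```

In a 3D UI, distance and width can be taken as angular extents from the user's viewpoint, so distant small buttons receive a high predicted acquisition time and can be penalised during layout optimization.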
Color Harmony: Color is used in UIs for many purposes, including drawing attention to certain elements, labeling or grouping items, and visualizing similarities or differences between elements. This topic has also been explored in HCI with regard to human ability; for example, Chroma [24] is a wearable AR system based on Google Glass which automatically adapts the scene based on the type of color blindness and allows users to see a filtered image of the current scene in real time. Color also has the ability to impact human perception, and certain colors may invoke moods and feelings in the viewer. From an HCI perspective, the coloration of a UI layout may affect the degree to which a user finds the layout aesthetically pleasing or displeasing. When the placement of two or more colors generates a pleasant response, the colors are said to be in harmony. The exact definition of color harmony is not clearly delineated, however. For centuries, artists have studied the balance and positioning of colors that evoke a sense of harmony, but these methods have often lacked robust scientific methodology and have been subject to the discretion of the artist, creating many different definitions of the concept of color harmony.
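One family of computational definitions scores a color pair by its deviation from a classical hue-wheel template. The complementary-template scorer below is a deliberately simple example of this idea; the linear falloff and the 30-degree tolerance are our own choices, not a standard from the literature.

```python
def hue_distance_deg(h1, h2):
    """Smallest angular distance between two hues on the 0-360 wheel."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def complementary_harmony(h1, h2, tolerance_deg=30.0):
    """Score in [0, 1] for how close a hue pair is to the classical
    complementary relation (180 degrees apart), with a linear falloff
    to zero at `tolerance_deg` of deviation. One of many template-
    based definitions of harmony; the parameters are illustrative."""
    deviation = abs(hue_distance_deg(h1, h2) - 180.0)
    return max(0.0, 1.0 - deviation / tolerance_deg)
```

A full harmony model would evaluate several templates (analogous, triadic, complementary, and so on) and take the best-matching one; this single-template scorer shows the mechanism.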
Text Legibility: UIs designed for AR technology will usually contain some form of textual content. Although the ability to overlay this virtual content onto real objects in the environment is a hallmark of AR, it also induces challenges with the placement and design of text for these systems. For instance, text legibility may suffer due to the interaction between the content and the texture of the background. The switching of the user's focus between the real environment and overlaid virtual data is known as competitive see-through [1] and is correlated with the user's comfort and the usability of the application. Limited text legibility can spoil the AR experience and its effectiveness in conveying content to the user; thus, methods of designing and placing virtual text content have been widely studied in the HCI field. For example, Manghisi et al.
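A first-order legibility check that a toolkit can compute automatically is the WCAG contrast ratio between text and background colors. For optical see-through AR this is only a lower-bound heuristic, since the physical background also shines through the display, but it illustrates the kind of model such a system can evaluate per layout candidate:

```python
def relative_luminance(r, g, b):
    """WCAG 2.x relative luminance from sRGB components in [0, 1]."""
    def chan(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * chan(r) + 0.7152 * chan(g) + 0.0722 * chan(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1:1 (identical)
    to 21:1 (black on white). WCAG AA asks for at least 4.5:1 for
    body text."""
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)
```

For example, black text on a white panel reaches the maximum 21:1, while mid-grey text on a slightly lighter grey panel would fail the 4.5:1 body-text threshold.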
V. CONSTRUCTING UI TOOLKIT
In virtual reality (VR), parameter analysis is crucial for creating a realistic and immersive experience. This analysis focuses on using feedback and other hardware to simulate the resistance that the user's body would feel in the real world. Using our analyses of the effects of each controllable and uncontrollable parameter on human performance, we can convert each parameter into a predictive model. Fig. 3 displays the toolkit's objective function menu, with sliders to adjust the weight of each function derived from the parameter analysis. As seen in the figure, the UI menu includes sliders to adjust the weight of each design objective. The UI inspector interface allows the designer to assign constraints for each interface element in the UI. Through manipulation of the cognitive level slider, the UI toolkit enables the designer to specify the level of detail and information displayed in interface elements for the potential environments the user may be in while interacting with such UIs. Thus, the developer begins by adjusting the weights of the objective functions based on the parameters that will be most impactful for the target.
Fig. 3. Example of an objective function menu.
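The slider menu in Fig. 3 corresponds to a weighted-sum objective over the per-parameter models. Below is a minimal sketch of how candidate layouts could be ranked under such weights; the model names and scores are illustrative, with each score assumed normalised to [0, 1] and lower meaning better:

```python
def layout_cost(layout_scores, weights):
    """Weighted-sum objective mirroring the slider menu: each slider
    weight scales one normalised per-parameter cost (ergonomics,
    cognitive load, colour harmony, text legibility, ...)."""
    return sum(weights[name] * layout_scores[name] for name in weights)

# Hypothetical slider settings and two candidate layouts.
weights = {"consumed_endurance": 0.4, "cognitive_load": 0.3,
           "color_harmony": 0.2, "text_legibility": 0.1}

candidate_a = {"consumed_endurance": 0.2, "cognitive_load": 0.5,
               "color_harmony": 0.1, "text_legibility": 0.3}
candidate_b = {"consumed_endurance": 0.7, "cognitive_load": 0.4,
               "color_harmony": 0.6, "text_legibility": 0.2}

# The optimizer keeps whichever candidate minimises the weighted cost.
best = min([candidate_a, candidate_b], key=lambda s: layout_cost(s, weights))
```

Raising one slider (e.g., consumed endurance) simply increases that model's influence on the ranking, which is how the designer steers the optimizer without editing any model internals.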
Here is a summary of common goals for UI toolkit design as described by Olsen Jr. [26] in his influential work on evaluating user interface systems:
- Reduce development viscosity: The toolkit should reduce the time needed to create a new solution. In this case, our toolkit should allow the designer to create an inclusively immersive UI in a shorter amount of time than without the toolkit.
- Least resistance to good solutions: The toolkit should encapsulate and simplify expertise by utilising various optimisation methods as well as feedback from the designer.
- Lower skill barriers: The toolkit should be simple and efficient to use, allowing designers with various skill and expertise levels to design UIs with ease.
- Power in common infrastructure: The UIs designed with our toolkit should provide users with access to a multitude of abilities and services which would not have been easily available without a UI.
- Enabling scale: The variety and number of UI layouts constructed with our UI toolkit in a given amount of time should be greater than those achievable without the toolkit.
Using parametric analysis and a UI toolkit in VR is essential for achieving effects such as realistic weight perception: combining real-time analysis of user movements with effective UI design allows developers to dynamically manipulate virtual feedback to simulate the sensation of weight. More broadly, parametric analysis is crucial for developing high-quality user interfaces in VR because it allows designers to systematically evaluate the impact of various variables on the user experience. Instead of relying on static design approaches, parametric analysis helps VR developers understand how to build flexible, data-driven interfaces that adapt to the user and their unique immersive environment.
Finally, the construction of a model-based UI design toolkit has revealed the potential of integrating techniques adapted from design engineering, especially for systems which are complex and costly to validate. By allowing the designer to adjust the objective function weights in multi-objective weighted optimization, this dedicated VR UI toolkit provides a reusable system of modular components that are adjustable based on specific parameters. This enables the rapid development of UIs that are consistent, performant, and optimized for immersive experiences.
VI. CONCLUSION
This paper has explored various controllable and uncontrollable parameters that dominate user performance and comfort when interacting with VR and AR user interfaces. This was conducted via a model-based approach, in which design knowledge in the form of user simulations, models, and/or heuristics is used to model how users interact with and perceive UI layouts. We have shown that physical ergonomic models (CE, muscle activation, and RULA), perceptual models (color harmony and text legibility), and cognitive models (cognitive load) affect the user's experience.
This parameter analysis will aid in understanding how to improve the accessibility of VR and AR systems for users with varying degrees of perceptual, cognitive, and physical capability. Rather than taking a universal design approach, which aims to develop systems for general use with a "one size fits all" mentality, we have demonstrated that an ability-based design perspective is beneficial for focusing on ability throughout the design process and can create systems which leverage the full range of human potential. Further, we have designed and implemented a UI design toolkit for constructing 3D UI layouts which can suggest alternative configurations, tailored to user capability, to the designer at design time. In the future, we aim to refine our current function models to enable more accurate and efficient creation of UIs, as well as explore and integrate other models to adapt to other user capabilities.
REFERENCES
[1] Lee, J. and Lee, K. (2024). LimberUI: A Model-Based Design Tool for 3D UI Layouts Accommodating Uncertainty in Context of Use and User Attributes. HCI International Conference, Proceedings, Part II, pages 29–40, Washington DC, USA.
[2] Todi, K., Weir, D., and Oulasvirta, A. (2016). Sketchplore: Sketch and explore with a layout optimiser. pages 543–555. doi: 10.1145/2901790.2901817
[3] Mott, M., Tang, J., Kane, S., Cutrell, E., and Morris, M. R. (2020). "I just went into it assuming that I wouldn't be able to have the full experience": Understanding the accessibility of virtual reality for people with limited mobility. In ASSETS 2020. ACM.
[4] Blandford, A. (2013). Semi-structured qualitative studies.
[5] Dias, S., Diniz, J., Konstantinidis, E., Savvidis, T., Zilidou, V., Bamidis, P., Grammatikopoulou, A., Dimitropoulos, K., Grammalidis, N., Jaeger, H., Stadtschnitzer, M., Silva, H., Telo, G., Ioakeimidis, I., Ntakakis, G., Karayiannis, F., Huchet, E., Hoermann, V., Filis, K., Theodoropoulou, E., Lyberopoulos, G., Kyritsis, K., Papadopoulos, A., Depoulos, A., Trivedi, D., Chaudhuri, R., Klingelhoefer, L., Reichmann, H., Bostantzopoulou, S., Katsarou, Z., Iakovakis, D., Hadjidimitriou, S., Charisis, V., Apostolidis, G., and Hadjileontiadis, L. (2020). Assistive HCI-serious games co-design insights: The case study of the i-PROGNOSIS personalized game suite for Parkinson's disease. Frontiers in Psychology, 11.
-
Gajos, K. Z., Weld, D. S., and Wobbrock, J. O. (2010). Automatically generating personalized user interfaces with supple. Artificial Intelligence, 174(12):910950.
-
Kristensson, P. O., Lilley, J., Black, R., and Waller, A. (2020). A Design Engineering Approach for Quantitatively Exploring Context-Aware Sentence Retrieval for Nonspeaking Individuals with Motor Disabilities, page 111. Association for Computing Machinery, New York, NY, USA.
-
Flatla, D. R., Reinecke, K., Gutwin, C., and Gajos, K. Z. (2013). SPRWeb: Preserving Subjective Responses to Website Colour Schemes through Automatic Recolouring, page 20692078. Association for Computing Machinery, New York, NY, USA.
-
Todi, K., Weir, D., and Oulasvirta, A. (2016). Sketchplore: Sketch and explore with a layout optimiser. pages 543555.
-
Hincapié-Ramos, J. D., Guo, X., Moghadasian, P., and Irani, P. (2014). Consumed endurance: A metric to quantify arm fatigue of mid-air interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 14, page 10631072, New York, NY, USA. Association for Computing Machinery.
-
Bachynskyi, M., Palmas, G., Oulasvirta, A., and Weinkauf, T. (2015). Informing the design of novel input methods with muscle coactivation clustering. ACM Trans. Comput.- Hum. Interact., 21(6).
-
Evangelista Belo, J. a. M., Feit, A. M., Feuchtner, T., and Grønbæk, K. (2021). XR- gonomics: Facilitating the Creation of Ergonomic 3D Interfaces. Association for Com- puting Machinery, New York, NY, USA.
-
Delp, S., Anderson, F., Arnold, A., Loan, P., Habib, A., John, C., Guendelman, E., and Thelen, D. (2007). Opensim: Open-source software to create and analyze dynamic simulations of movement. Biomedical Engineering, IEEE Transactions on, 54:1940 1950.
-
Saul, K., Hu, X., Goehler, C., Vidt, M., Daly, M., Velisar, A., and Murray,
W. (2014). Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model. Computer methods in biomechanics and biomedical engineering, 18:1 14.
-
McAtamney, L. and Nigel Corlett, E. (1993). Rula: asurvey method for the investigation of work-related upper limb disorders. Applied Ergonomics, 24(2):9199.
-
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructionaldesign. Learning and Instruction, 4(4):295312.
-
Hart, S. G. and Staveland, L. E. (1988). Development of nasa-tlx (task load index): Results of empirical and theoretical research. In Hancock, P.
A. and Meshkati, N., editors, Human Mental Workload, volume 52 of Advances in Psychology, pages 139183. North-Holland.
-
Hess, E. H. and Polt, J. M. (1964). Pupil size in relation to mental activity during simple problem-solving. Science, 143(3611):11901192.
-
Lindlbauer, D., Feit, A. M., and Hilliges, O. (2019). Context-aware online adaptation of mixed reality interfaces. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST 19, page 147160, New York, NY, USA. Association for Computing Machinery.
-
Mathur, A., Gehrmann, J., and Atchison, D. (2013). Pupil shape as viewed along the horizontal visual field. Journal of vision, 13.
-
Beatty, J. and Lucero-Wagoner, B. (2000). The pupillary system in Handbook of psy- chophysiology. Cambridge University Press.
-
Fitts, P. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of experimental psychology, 47 6:381391.
-
MacKenzie, I. S. (1992). Fitts law as a research and design tool in human-computer interaction. Hum.-Comput. Interact., 7(1):91139.
-
Tanuwidjaja, E., Huynh, D., Koa, K., Nguyen, C., Shao, C., Torbett, P., Emmenegger, C., and Weibel, N. (2014). Chroma: A wearable augmented-reality solution for color blindness. In Proceedings of the 2014 ACM International Joint Conference on Perva- sive and Ubiquitous Computing, UbiComp 14, page 799810, New York, NY, USA.
Association for Computing Machinery
-
Manghisi, V. M., Gattullo, M., Fiorentino, M., Uva, A. E., Marino, F., Bevilacqua, V., and Monno, G. (2017). Predicting text legibility over textured digital backgrounds for a monocular optical see-through display. Presence: Teleoper. Virtual Environ., 26(1):115.
-
Olsen Jr, D. (2007). Evaluating user interface systems research. pages 251258.
