Future space exploration will inevitably require astronauts to exercise a higher degree of autonomy in decision-making and in identifying and resolving contingencies. Space robotics will become a major aspect of this challenge, and the ability to access digital information in the field will therefore be crucial to mission success.
To give suited astronauts the ability to operate robots and to access all information necessary for nominal operations and contingencies, this thesis proposes introducing an in-field-of-view Head Mounted Display system into current Extravehicular Activity spacesuits. The system will feed task-specific information on request and, through Augmented Reality technology, recognize objects in the real world and overlay information on them for error checking and status reporting. The system will increase the astronaut’s overall situational awareness and nominal task accuracy while reducing execution time and the risk of human error.
The aim of this system is to relieve astronauts of trivial cognitive workload by guiding them through their operations and checking their work. Secondary objectives include the introduction of electronic checklists, the display of the status of the suit and surrounding systems, and interaction capabilities. Due to the nature of the system, the features that could be introduced are nearly limitless, allowing great flexibility and future evolution without major design changes. This work focuses on the preliminary design of an experimental Head Mounted Display and its testing for initial evaluation and comparison with existing information feed methods. The system will also be integrated and tested in the University of Maryland Space Systems Laboratory’s MX-2 experimental spacesuit analogue.
Source: University of Maryland
Author: Massimiliano Di Capua