Type In Space App Design Story for HoloLens 2

Project Background

As a designer who loves type design and typography, I have been fascinated by holographic type ever since I first encountered HoloLens in 2015. Microsoft HoloLens lets you place and view holographic objects in your physical environment. Just like real physical objects, you can place them on a table or a wall, move them around, and look at them from different angles.

Typography Insight for HoloLens (2016) was my first experimental project using holographic type in space. I built an app that lets you place and experience beautiful holographic type in a real physical space. Building on the composition features of that first app, I developed a new project, Type In Space (2018) for HoloLens.

Seeing and interacting with holographic type in a real environment is one of the most exciting experiences. Now, with HoloLens 2's new hand-tracking and eye-tracking input, you can directly touch, grab, and manipulate holographic type with your hands. This intuitive interaction opens up entirely new possibilities.

In this story, I would like to share the journey of bringing the Type In Space app from the HoloLens 1st gen to HoloLens 2.

Intuitive Interaction on HoloLens 2

HoloLens 2 packs new technology such as an all-new MEMS (micro-electromechanical systems) laser display, a larger field of view, HPU (Holographic Processing Unit) 2.0, and an onboard AI coprocessor. However, one of the most exciting new features of HoloLens 2 is fully articulated hand-tracking input. With hand-tracking input, you can directly interact with holograms by touching, grabbing, and pressing them. This new input interaction dramatically changes the way we interact with holographic objects and makes it feel natural and 'instinctual'.

With hand-tracking input, I could finally touch and grab beautiful holographic type (text).

MRTK: Components for Spatial Interactions and UI

MRTK (Mixed Reality Toolkit) is an open-source project from Microsoft. MRTK-Unity provides the essential components and features for easily designing and developing mixed reality apps in Unity. The latest release, MRTK v2, supports HoloLens/HoloLens 2, Windows Mixed Reality, and OpenVR platforms.

Because MRTK v2 was completely redesigned from the ground up, its input system is not compatible with MRTK v1 (HoloToolkit). The original version of my Type In Space app used HoloToolkit, so this time I started a new project with MRTK v2. Since most of the core interactions could be achieved with MRTK v2's components, the implementation went faster than I expected.

UX Elements: Text Object

Text is the most important component of the Type In Space app. A text object is composed of the following elements.

TextMesh Pro

HoloLens has a high-resolution display at 47 PPD (pixels per degree), which makes it possible to display crisp, beautiful text. To take proper advantage of this high resolution, it is important to use a properly optimized text component. Unity's TextMesh Pro uses a technique called SDF (Signed Distance Field) to render text that stays sharp and clear regardless of distance. For more details, see the Typography guidelines and Text in Unity pages on the Mixed Reality Dev Center.
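For example, spawning a world-space TextMesh Pro object from script takes only a few lines. Here is a minimal sketch; the TextSpawner wrapper and the serialized font asset field are illustrative, not the app's actual code:

```csharp
using TMPro;
using UnityEngine;

public class TextSpawner : MonoBehaviour
{
    // Any TMP_FontAsset generated from a font file works here (illustrative field).
    [SerializeField] private TMP_FontAsset fontAsset;

    public TextMeshPro SpawnText(string content, Vector3 position)
    {
        // TextMeshPro (the 3D, non-UI variant) renders SDF text in world space.
        var go = new GameObject("HolographicText");
        go.transform.position = position;

        var tmp = go.AddComponent<TextMeshPro>();
        tmp.text = content;
        tmp.font = fontAsset;
        // World-space font size is in Unity units, so keep it small for HoloLens.
        tmp.fontSize = 0.5f;
        tmp.alignment = TextAlignmentOptions.Center;
        return tmp;
    }
}
```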

Near & Far Manipulation

Being able to directly grab and manipulate the holographic type is one of the most important core interactions. MRTK’s ManipulationHandler script allows you to achieve one- or two-handed direct manipulation.

ManipulationHandler also allows far-field interactions with hand rays in HoloLens 2. You can use two hand rays to grab/move/rotate/scale objects.

Direct two-handed manipulation with MRTK’s Manipulation Handler
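Wiring this up is mostly a matter of adding components. A minimal sketch, assuming MRTK v2 (property names can vary slightly between releases):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public static class ManipulationSetup
{
    // Makes a text object grabbable with near (touch) and far (hand ray) interaction.
    public static void MakeGrabbable(GameObject textObject)
    {
        // A collider is required for both near grabs and hand ray hits.
        if (textObject.GetComponent<Collider>() == null)
        {
            textObject.AddComponent<BoxCollider>();
        }

        // NearInteractionGrabbable enables direct grab with articulated hands.
        textObject.AddComponent<NearInteractionGrabbable>();

        // ManipulationHandler handles one- and two-handed move/rotate/scale.
        var handler = textObject.AddComponent<ManipulationHandler>();
        handler.ManipulationType = ManipulationHandler.HandMovementType.OneAndTwoHanded;
    }
}
```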

Bounding Box

The bounding box is the standard interface for precisely scaling and rotating an object in HoloLens. For the Type In Space app, I used it to indicate the currently selected text object by displaying corner handles. MRTK’s Bounding Box provides various configurable options for the visual representation of the handles as well as their behaviors.

Bounding Box in HoloLens 2
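A sketch of how a selection indicator could be wired up with MRTK v2's BoundingBox; the manual-activation property names are from MRTK v2 and may differ between releases, and the helper class itself is illustrative:

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public static class SelectionIndicator
{
    // Adds a bounding box that the app shows only for the selected object.
    public static BoundingBox AddSelectionBox(GameObject textObject)
    {
        var box = textObject.AddComponent<BoundingBox>();
        // Manual activation lets the app decide when the handles appear.
        box.BoundingBoxActivation = BoundingBox.BoundingBoxActivationType.ActivateManually;
        return box;
    }

    // Show or hide the corner handles when the selection changes.
    public static void SetSelected(BoundingBox box, bool selected)
    {
        box.Active = selected;
    }
}
```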

UX Elements: Menu UI for Text Properties

Button

The button is one of the most foundational UI components. In HoloLens 2, you can directly press buttons with hand-tracking input. However, since you are essentially pressing through the air without any physical tactile feedback, it is important to amplify visual and audio feedback.

MRTK’s HoloLens 2 button

MRTK’s HoloLens 2 style button provides rich visual/audio cues and handles complex logic for the speed/trajectory/direction of the finger movements. Visual feedback includes proximity-based lighting, highlight box, compressing front cage, hover light on the surface, pulse effect on press event trigger, and the fingertip cursor.

Since it prevents presses from behind, it can be used for overlay menus too (see the hand menu videos below). I used the HoloLens 2 style button prefab for all menu UIs, including the font and color lists.

MRTK’s HoloLens 2 button provides various types of visual feedback
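Hooking app logic to a HoloLens 2 button goes through the Interactable component on the prefab. A minimal sketch; the FontMenuButton class and ApplyFont method are hypothetical:

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class FontMenuButton : MonoBehaviour
{
    // The Interactable component on the HoloLens 2 button prefab.
    [SerializeField] private Interactable button;

    private void Start()
    {
        // OnClick fires for direct finger presses as well as hand ray + air tap.
        button.OnClick.AddListener(ApplyFont);
    }

    private void ApplyFont()
    {
        // Hypothetical: apply this button's font to the selected text object.
        Debug.Log("Font applied to the selected text object.");
    }
}
```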

Hand Menu

In the original version, I had a floating menu with tag-along behavior. It followed the user so that it could be accessed at any time. In HoloLens 2, there is an emerging pattern called the ‘hand menu’. It uses hand tracking to display quick menus around the hands. This is very useful for displaying a contextual menu when it is needed, then hiding it and continuing to interact with the target content. To learn more about the hand menu, see the Hand Menu design guidelines on the Mixed Reality Dev Center.
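Under the hood, a hand menu is typically driven by MRTK's HandConstraintPalmUp solver. A minimal setup sketch; API names here are from later MRTK v2 releases and earlier versions differ slightly:

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public static class HandMenuSetup
{
    // Attaches a menu to the palm so it appears when the palm faces the camera.
    public static void AttachToHand(GameObject menu)
    {
        // SolverHandler tracks the hand joints of either hand.
        var solverHandler = menu.AddComponent<SolverHandler>();
        solverHandler.TrackedTargetType = TrackedObjectType.HandJoint;
        solverHandler.TrackedHandness = Handedness.Both;

        // HandConstraintPalmUp places the menu on the ulnar (pinky) side of the hand.
        var constraint = menu.AddComponent<HandConstraintPalmUp>();
        constraint.SafeZone = HandConstraint.SolverSafeZone.UlnarSide;
    }
}
```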

I started using this UX pattern for the text property menus and tested them. Below are Mixed Reality Capture videos from a HoloLens 2 device.

Hand Menu for the text properties

As you can see, it works quite well. You can quickly raise your palm to display the menu and change the properties. Since holding the menu up could cause muscle fatigue when changing multiple properties or fine-tuning details, I added an additional option to world-lock the menu. I tried a pin button and a grab-and-pull-out behavior.

Hand Menu Explorations — Pin / Grab & Pull to world-lock the menu
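One simple way to implement the pin behavior is to stop the solver from updating, which leaves the menu wherever it currently is. A sketch under that assumption; the MenuPin class is illustrative:

```csharp
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public class MenuPin : MonoBehaviour
{
    // The SolverHandler driving the hand-attached placement.
    [SerializeField] private SolverHandler solverHandler;

    private bool pinned;

    // Called by the pin button, or when the menu is grabbed and pulled out.
    public void TogglePin()
    {
        pinned = !pinned;
        // Disabling the solver leaves the menu where it is: world-locked.
        solverHandler.enabled = !pinned;
    }
}
```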

The hand menu worked well with the target text object in the near-field interaction range. However, I found that it became problematic for far-field interaction. Since I had to look at the target text at a distance and the hand menu in the near field, my eyes had to continuously switch focus/depth. This quickly caused eye strain.

Focal depth switching between the target object and the menu causes eye strain

A solution could be attaching the text-related menus to the target text object. However, for text far away, the menu would have to be very large to remain usable with a pointer (hand ray, or the gaze cursor on HoloLens 1). It would visually overwhelm the target text, which should be the hero object of the experience. (Content is King, Content before Chrome — sounds familiar? :)) I also wanted to keep the menu directly interactable with my fingers when I interact with text objects in the near field.

In-Between Menu

My solution was to place the text property menus between the target object and my eyes (headset), but closer to the target text object, since the goal is to minimize the focus/depth switching. After playing with the values, I placed the menu 30% of the way from the target object (70% from my eyes). This allowed me to directly interact with the menus easily when the text objects are in the near field. This menu positioning/sizing also works well with the HoloLens 1st gen's smaller FOV.

Minimized focal depth switching between the target object and the menu. The menu automatically scales up/down to maintain the target size based on the distance.

The menu automatically scales up/down based on the distance to maintain a constant size

Fortunately, one of MRTK’s awesome Solvers, called ‘InBetween’, provides exactly this positioning mechanism. Using the InBetween solver, you can easily position an object between two other objects and specify the distance ratio between them.

To maintain the target size of the menu regardless of distance, I used MRTK’s ‘ConstantViewSize’ solver. As you can see in the video, the menu automatically scales up when it moves far away and scales down when it moves into the near-interaction range. This makes the menu easily interactable with both a direct finger press and a hand ray pointer + air-tap/pinch.
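A sketch of how the two solvers might be combined; the property names are from recent MRTK v2 releases, and the offset-ratio semantics may differ between versions:

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public static class InBetweenMenuSetup
{
    // Places the property menu on the line between the text object and the head.
    public static void Setup(GameObject menu, Transform targetText)
    {
        // Primary tracked target: the selected text object.
        var solverHandler = menu.AddComponent<SolverHandler>();
        solverHandler.TrackedTargetType = TrackedObjectType.CustomOverride;
        solverHandler.TransformOverride = targetText;

        // InBetween tracks a second target (the head) and lerps between the two.
        var inBetween = menu.AddComponent<InBetween>();
        inBetween.SecondTrackedObjectType = TrackedObjectType.Head;
        inBetween.PartwayOffset = 0.3f; // the 30%/70% ratio described above

        // ConstantViewSize rescales the menu to keep a constant apparent size.
        menu.AddComponent<ConstantViewSize>();
    }
}
```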

In-Between menu works well with near interactions too

UX Elements: Main Menu for the global features

Main Menu

For the buttons that provide global (non-object-specific) features, I left them in the hand menu. As described before, you can easily grab and pull the menu out to world-lock it.

The main menu for the global features

The main menu includes:

  • New Text | Clear Scene | Save & Load Scene
  • Spatial Mapping | Mesh Visualization | Snap to Surface
  • Physics: Rigid Body | Kinematic | Slider UI for gravity(force)
  • Grab & Duplicate Toggle
  • Random Composition
  • About

Below is the latest design iteration result: a compact version that automatically world-locks when you drop your hand. The world-locked menu makes it easy to interact with multiple UI controls.

UX Elements: Annotation

Being able to place text objects in physical space means that you can annotate physical objects. To help visually connect an annotation to its target object, I added an optional line and sphere component. By simply grabbing and moving the text object (anchor) and the sphere (pivot), you can easily create annotations with a line connecting the text and the sphere. For this feature, I used MRTK’s Tooltip component.

Annotation Feature
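A sketch of wiring an annotation with MRTK's ToolTip and ToolTipConnector; the Annotate helper is illustrative, and property names may vary between MRTK releases:

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public static class AnnotationSetup
{
    // Points an MRTK ToolTip's connecting line at a target object.
    public static void Annotate(ToolTip toolTip, GameObject target, string label)
    {
        toolTip.ToolTipText = label;

        // ToolTipConnector keeps the line anchored to the target as it moves.
        var connector = toolTip.GetComponent<ToolTipConnector>();
        connector.Target = target;
    }
}
```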

UX Elements: Spatial Mapping

Spatial Mapping is one of the most exciting features of HoloLens. With Spatial Mapping, you can make holographic objects interact with the real-life environment. In the original version of the Type In Space app, I used the Gaze cursor to move the text on a physical surface. In the new version on HoloLens 2, I was able to use the hand ray’s endpoint to attach the text to a surface and have it follow the surface. The placing and moving states are toggled by air tap.

To use Spatial Mapping in MRTK, you can simply turn on the Spatial Awareness feature. This gives you the spatial mesh of the environment. The Surface Magnetism solver is a convenient utility that can make any object snap to the surface.
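A sketch of both pieces, assuming MRTK v2's default 'Spatial Awareness' layer and hand ray tracking; the helper class is illustrative:

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public static class SnapToSurfaceSetup
{
    // Makes a text object follow the hand ray and snap to the spatial mesh.
    public static void EnableSnapping(GameObject textObject)
    {
        var solverHandler = textObject.AddComponent<SolverHandler>();
        // Track the controller/hand ray so the text follows where it points.
        solverHandler.TrackedTargetType = TrackedObjectType.ControllerRay;

        var magnetism = textObject.AddComponent<SurfaceMagnetism>();
        // The spatial mesh lives on MRTK's "Spatial Awareness" layer by default.
        magnetism.MagneticSurfaces = new LayerMask[] { LayerMask.GetMask("Spatial Awareness") };
    }

    // Toggles the spatial mesh observers (the Spatial Mapping menu option).
    public static void SetSpatialMapping(bool on)
    {
        if (on) { CoreServices.SpatialAwarenessSystem.ResumeObservers(); }
        else { CoreServices.SpatialAwarenessSystem.SuspendObservers(); }
    }
}
```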

UX Elements: Physics

With Spatial Mapping, I have physical surfaces that I can use. How about applying a gravity force to make the type fall or fly and collide with the physical environment? I already had a simple ‘gravity’ option in the original version. In the new version, I added a slider so that the user can control the amount of force as well as its direction.

Physics options
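A sketch of how the slider could drive a custom gravity force; PinchSlider is MRTK's slider component, and the GravityController class is illustrative:

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class GravityController : MonoBehaviour
{
    [SerializeField] private PinchSlider forceSlider; // MRTK slider UI
    [SerializeField] private Rigidbody textBody;      // Rigidbody on a text object

    private float force; // slider value (0..1) mapped to a force magnitude

    private void Start()
    {
        // Use a custom, user-controlled force instead of Unity's built-in gravity.
        textBody.useGravity = false;
        forceSlider.OnValueUpdated.AddListener(d => force = d.NewValue * 9.81f);
    }

    private void FixedUpdate()
    {
        // Apply the force each physics step; swap Vector3.down for other directions.
        textBody.AddForce(Vector3.down * force, ForceMode.Acceleration);
    }
}
```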

UX Elements: Text Input with keyboard and speech

HoloLens 2’s direct interaction with holograms dramatically improved the text input experience too. Just like using a physical keyboard, you can use your fingers to type text on the holographic keyboard. Of course, you can still use speech input for dictation. The system keyboard has a built-in speech dictation input button. MRTK provides great examples of using the system keyboard and speech input.

Keyboard and speech input

Below is an example of using Dictation input. The dictation service is provided by Windows.
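A minimal sketch using Unity's DictationRecognizer, which is backed by the Windows dictation service; feeding the result into the selected text object is left as a stub:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class DictationInput : MonoBehaviour
{
    private DictationRecognizer recognizer;

    private void Start()
    {
        // DictationRecognizer uses the Windows dictation service under the hood.
        recognizer = new DictationRecognizer();
        recognizer.DictationResult += (text, confidence) =>
        {
            Debug.Log($"Dictated: {text}");
            // Hypothetical: push the recognized text into the selected text object.
        };
        recognizer.Start();
    }

    private void OnDestroy()
    {
        recognizer.Dispose();
    }
}
```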

UX Elements: Grab & Duplicate

The original version had a simple duplicate feature that lets you quickly create new text with the same text properties. To make the duplicated text visible, I placed it with a small positional offset. This created an interesting visual effect with an array of instances.

I modified this feature so that you can duplicate text by simply grabbing and moving it with your hands, which is much easier than repeatedly pressing the duplicate button. World-locked trails of holographic text look gorgeous.

Grab & duplicate feature
Layout example
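A sketch of the duplicate-on-grab idea using ManipulationHandler's manipulation events. This simplified version leaves one world-locked copy per grab (the app's trails come from repeated moves); the GrabDuplicate class is illustrative, not the actual implementation:

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class GrabDuplicate : MonoBehaviour
{
    // Toggled by the 'Grab & Duplicate' button in the main menu.
    public static bool DuplicateMode;

    private void Start()
    {
        var handler = GetComponent<ManipulationHandler>();
        handler.OnManipulationStarted.AddListener(_ =>
        {
            if (DuplicateMode)
            {
                // Leave a world-locked copy behind; the original moves with the hand.
                Instantiate(gameObject, transform.position, transform.rotation);
            }
        });
    }
}
```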

Supporting HoloLens 1st gen and Windows Mixed Reality VR devices

One of the benefits of using MRTK is cross-platform support. MRTK’s interaction building blocks and UI components support various types of input, such as HoloLens 1’s GGV (Gaze, Gesture, Voice) input and Windows Mixed Reality immersive headsets’ motion controllers.

The text properties menu (In-Between menu) works well with GGV input without any modification. Since it is always displayed on the right side of the currently selected text object, it works well even with HoloLens 1’s smaller FOV. Interacting with the motion controller’s pointer in VR also works well.

Since hand tracking is not available for the hand menu on these devices, I made the Main menu a floating tag-along menu with a pin/unpin toggle. Other than this small modification, I was able to publish the app for both HoloLens 1st gen and HoloLens 2 with a single Unity project.
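A tag-along behavior like this can be built with MRTK's RadialView solver. A minimal sketch; the distance values are illustrative:

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public static class TagAlongMenuSetup
{
    // Makes the main menu lazily follow the user's view.
    public static void MakeTagAlong(GameObject menu)
    {
        var solverHandler = menu.AddComponent<SolverHandler>();
        solverHandler.TrackedTargetType = TrackedObjectType.Head;

        // RadialView keeps the menu within a comfortable distance and view angle.
        var radialView = menu.AddComponent<RadialView>();
        radialView.MinDistance = 0.6f;
        radialView.MaxDistance = 1.0f;
        // Pin/unpin can reuse the same trick as the hand menu: disable the solver.
    }
}
```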

Type In Space on HoloLens 1st gen with Gaze & Air-tap

