Museum of Type for Mixed Reality

Last year, driven by my love of typography, I explored spatial text layout in physical space and introduced two apps for HoloLens.

With Windows Mixed Reality immersive headsets, I have continued my journey, exploring the possibilities of typography education in virtual space.

Windows Mixed Reality headset with motion controllers (Image credit: DELL)

Museum of Type is a virtual museum where you can explore and learn about historically important typefaces. With motion controllers, you can pick up type and observe its detailed shape and characteristics. I wanted to share my design and development story.

*I work at Microsoft as a User Experience Designer. Museum of Type, Typography Insight for HoloLens and News Space are my personal projects. All experiments and related opinions are my own.

Project Background

Typography, with all its details and classifications, can be a dry subject, and that dryness can block the way to learning for many young people and students in this digital era. Books are great, but books on typography often get left on the shelves. With a mixed reality experience, I wanted to bridge that gap by making the subject come alive: more tactile, physical, and interactable. Typography education leveraging new media and interaction design has been a consistent research topic of mine, as you can find in my previous article.

Typography Insight for HoloLens (2016)

When I first experienced virtual reality, I was really excited that I could construct my own virtual environment at any scale. With this interest in the virtual environment, I started thinking about an imaginary museum space where I could walk around, observe, and learn about historically important, beautiful typefaces.

Sketches

I started sketching out some ideas for the environment. Since this would be a space for type, I wanted to make it a clean, undecorated environment where the user can focus on the content: beautiful typefaces and their history. For this reason, I started with simple planes and cubes in pure white to construct the environment.

Sketches for the environment and content

Setting up the tools and environment

Since I had learned the importance of experiencing my design on the device from HoloLens app design, I quickly started creating this environment in Unity to see how it feels in the actual headset. These are the required tools for Windows Mixed Reality app development.

  • PC and Windows Mixed Reality headset with motion controllers
    You can find Windows Mixed Reality headsets in the Microsoft Store. For the PC, I used my Dell Inspiron 7559 laptop, the same machine I used for my HoloLens app development. It has an Intel Core i7-6700HQ with an NVIDIA GTX 960M. Even though this is not a high-performance gaming laptop, I was able to develop apps for Windows Mixed Reality without any performance issues.

Useful building blocks and example scenes in Mixed Reality Toolkit

Design & Development iteration with Unity and Mixed Reality Portal

I started with the Motion Controller Test scene in MRTK since it had the basic motion controller and camera setup for Windows Mixed Reality. This scene already includes basic teleportation and locomotion with motion controllers, so you can easily create something in the Unity editor and move around with the motion controllers while wearing the headset.

Motion controllers paired and ready in Mixed Reality Portal

To experience your scene in the immersive headset, open the Mixed Reality Portal app and make sure your motion controllers are paired and working well in the cliff house.

While Mixed Reality Portal is running, simply hit Unity’s ‘Play’ button to enter game mode. This mode allows you to preview your scene without building and deploying your project. Now put on your headset and you will see your scene in three-dimensional space, along with your motion controllers. Using Unity’s game mode preview, you can quickly experience and iterate on your design ideas in the headset. Experiencing and testing your design in the headset is always important in MR app design and development, because it can be quite different from what you imagined and designed in the 2D editor.

When you click ‘Play’ in Unity, your scene is launched in your headset instantly

Constructing the virtual museum environment

I started laying out the walls using Unity’s cube game object and added text content in the space. With some iteration, I was able to find the right size and distance for the objects and text in three-dimensional space. I also played with the lighting and materials for the walls to get a proper amount of contrast with shadows. Since I am not familiar with shader programming, I picked one of the materials in MRTK and tweaked it for my environment.
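
To illustrate the idea, here is a minimal sketch of building a wall from a scaled cube primitive. The class name, dimensions, and material field are placeholders for illustration, not the app’s actual values.

```csharp
using UnityEngine;

// Builds simple gallery walls from Unity's cube primitive.
// Dimensions and the material are illustrative placeholders.
public class MuseumWallBuilder : MonoBehaviour
{
    public Material wallMaterial;   // e.g. a tweaked MRTK material

    void Start()
    {
        CreateWall(new Vector3(0f, 1.5f, 4f), new Vector3(6f, 3f, 0.1f));
        CreateWall(new Vector3(-3f, 1.5f, 1f), new Vector3(0.1f, 3f, 6f));
    }

    GameObject CreateWall(Vector3 position, Vector3 size)
    {
        var wall = GameObject.CreatePrimitive(PrimitiveType.Cube);
        wall.transform.position = position;
        wall.transform.localScale = size;   // a thin, scaled cube reads as a wall
        wall.GetComponent<Renderer>().material = wallMaterial;
        return wall;
    }
}
```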

Layout and locomotion testing in Unity

In Unity’s game mode, you can still click the scene tab and select or modify any object in the scene hierarchy, and you can see your updates in real time in the headset. This was really helpful for fine-tuning the detailed color, size, and position of the objects.

Initial test of the skybox, color, material and text rendering in space

In the virtual reality world, the skybox is an important element that shapes your app experience. You can simply think of it as a three-dimensional background. Since it surrounds your entire space, you can use 360-degree images as a texture for the skybox.
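
Hooking up a 360-degree image as the skybox takes only a few lines. A minimal sketch, assuming an equirectangular texture; Skybox/Panoramic is Unity’s built-in panoramic skybox shader:

```csharp
using UnityEngine;

// Swaps the scene's skybox at runtime using a 360-degree panorama.
// The texture itself is a placeholder you assign in the inspector.
public class SkyboxSetter : MonoBehaviour
{
    public Texture2D panorama;   // an equirectangular 360-degree image

    void Start()
    {
        var sky = new Material(Shader.Find("Skybox/Panoramic"));
        sky.SetTexture("_MainTex", panorama);
        RenderSettings.skybox = sky;
        DynamicGI.UpdateEnvironment();   // refresh ambient lighting from the new sky
    }
}
```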

The floor is also an important element in a virtual reality experience: it makes the user feel safe and grounded. The floor is included in MixedRealityCamera.prefab in MRTK; I just updated the color of its material for my app experience.

Building content and environment in Unity

Direct manipulation with motion controllers

After establishing the basic museum environment, I started placing some sample type glyphs in the space. I wanted to make them grabbable so that the user can pick up, hold, and observe their detailed shapes. We usually don’t have a chance to observe 2D type in a three-dimensional way in real life. As a type lover, being able to grab type and observe it from different angles was a very exciting experience.

Grab Mechanics Example scene in MRTK

To make the type grabbable with motion controllers, I used the grabber scripts from the Grab Mechanics Example scene in MRTK. This example scene contains useful scripts and prefabs for grabbing and throwing objects with motion controllers. Simply assigning the GrabbableChild and ThrowableObject scripts made my sample type glyphs grabbable with the motion controllers. Direct manipulation with motion controllers is one of the most exciting and fun interactions in mixed reality.

Since the motion controllers do not have a rigid body or box collider by default, you cannot grab objects directly. The Grab Mechanics Example scene uses two cubes as controllers that have a rigid body and box collider. To make the default controllers work with the grabbing behavior, you can attach these controller cubes to the controllers. Of course, you can hide the visuals of these cubes by simply changing the opacity with the ‘fade’ option in the material.
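
In code terms, the setup boils down to giving the controller proxy a kinematic rigid body and a trigger collider. A rough sketch under those assumptions; the component values are illustrative, not the exact settings from the example scene:

```csharp
using UnityEngine;

// Gives a controller proxy the physics pieces it needs for grab detection:
// a kinematic Rigidbody plus a trigger BoxCollider. Values are illustrative.
public class GrabberSetup : MonoBehaviour
{
    void Awake()
    {
        var body = gameObject.AddComponent<Rigidbody>();
        body.isKinematic = true;          // follows the tracked controller, not physics
        body.useGravity = false;

        var box = gameObject.AddComponent<BoxCollider>();
        box.isTrigger = true;             // detect overlap without pushing objects
        box.size = Vector3.one * 0.1f;

        // Hide the helper cube's visuals but keep the collider active
        // (the article fades the material instead; either approach works).
        var rend = GetComponent<Renderer>();
        if (rend != null) rend.enabled = false;
    }
}
```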

Rigid body & box collider (blue box) attached to the controllers for grabbing objects
The object also needs a box collider (green outline) to make it grabbable

Interacting with type with motion controllers is fun!

Optimizing text rendering quality

I introduced some techniques for optimizing text rendering quality in Unity (3DTextMesh and UIText) in my previous article. Unity now has a new text component, ‘TextMeshPro’, which enables crisp text rendering regardless of distance and size, using the SDF (signed distance field) technique. It shows rounded stroke-edge issues on very large text but works well at normal sizes. I used both 3DTextMesh and TextMeshPro to display text content.

Used TextMesh Pro for the rendering quality and multi-line text display

3DTextMesh requires a proper material assignment. When you change the font in the editor, Unity reverts the material back to the default ‘Font Material’, which does not support proper occlusion. You can find the instructions for creating and assigning a proper font texture and material in MRTK.
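
The gist of the fix is to rebuild the material from the font’s texture atlas using a depth-aware text shader. A minimal sketch, where Custom/Occluded3DText is a placeholder name for such a shader (MRTK ships one); the default GUI/Text Shader neither writes nor tests depth:

```csharp
using UnityEngine;

// Rebuilds a 3D TextMesh material so glyphs respect z-depth
// instead of showing through walls.
[RequireComponent(typeof(TextMesh), typeof(MeshRenderer))]
public class OccludedTextMaterial : MonoBehaviour
{
    void Start()
    {
        var textMesh = GetComponent<TextMesh>();

        // "Custom/Occluded3DText" is an assumed shader name, not a built-in.
        var material = new Material(Shader.Find("Custom/Occluded3DText"));
        material.mainTexture = textMesh.font.material.mainTexture; // reuse the font atlas

        GetComponent<MeshRenderer>().sharedMaterial = material;
    }
}
```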

Glyphs behind the wall show through because of an incorrect font material that ignores z-depth

Attaching UI to the motion controller

As the museum grew bigger with more content, I realized that I needed a way to easily jump to different sections. On the left motion controller, I added a simple menu interface that makes it easy to teleport to a specific section. One benefit of attaching UI to the motion controller is that the user can always access it easily, anytime, just like a wristwatch. It is a great place to put user interfaces for quick actions.

Holographic Button in MRTK’s Interactable Object Example scene

For the menu system, I used MRTK’s Holographic Button, which can be found in the Interactable Object Example scene. It contains predefined visual states and animations for different input states such as idle, ready, and pressed. It supports HoloLens’ gaze and gesture input as well as the immersive headset’s motion controller pointer input.

In the Interactable Object Example scene, you can also find a great example of using an ‘Interaction Receiver’. The receiver makes it easy to manage input events from multiple Interactable Objects in a single script. It is especially useful for a menu system with an array of multiple buttons.
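
To show the pattern, here is a simplified receiver-style sketch. The class, method, and button names are stand-ins for illustration, not MRTK’s actual InteractionReceiver API:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Receiver pattern: one script handles events from many interactable buttons,
// so per-button logic lives in a single switch instead of many scripts.
public class MenuReceiver : MonoBehaviour
{
    public List<GameObject> interactables;   // the menu buttons to listen to

    // Called by the input system (or a button script) with the pressed object.
    public void OnInputClicked(GameObject pressedButton)
    {
        switch (pressedButton.name)
        {
            case "SerifSection": TeleportTo("Serif"); break;   // hypothetical names
            case "SansSection":  TeleportTo("Sans");  break;
            default: Debug.Log("Unhandled button: " + pressedButton.name); break;
        }
    }

    void TeleportTo(string section)
    {
        Debug.Log("Teleporting to section: " + section);
        // Move the camera rig to the section's anchor point here.
    }
}
```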

To construct the menu’s layout, I used MRTK’s Object Collection script, which can lay out an array of objects in three-dimensional space with a specific surface type and spacing. The example below shows how I used it to make a curved layout for the multiple button objects.

Cylindrical grid menu layout with Object Collection script
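
The underlying math of such a curved layout is straightforward. Here is a small sketch that distributes child buttons along a cylindrical arc; the radius and arc values are placeholders, not Object Collection’s actual parameters:

```csharp
using UnityEngine;

// Lays out child objects along a cylindrical arc, similar in spirit
// to MRTK's Object Collection with a cylinder surface type.
public class CylindricalLayout : MonoBehaviour
{
    public float radius = 0.3f;        // distance from the pivot
    public float arcDegrees = 60f;     // total horizontal spread

    void Start()
    {
        int count = transform.childCount;
        for (int i = 0; i < count; i++)
        {
            // Spread children evenly across the arc, centered on zero.
            float t = count > 1 ? (float)i / (count - 1) : 0.5f;
            float angle = Mathf.Deg2Rad * Mathf.Lerp(-arcDegrees / 2f, arcDegrees / 2f, t);

            var child = transform.GetChild(i);
            child.localPosition = new Vector3(Mathf.Sin(angle), 0f, Mathf.Cos(angle)) * radius;
            // Face away from the pivot; flip if your button visuals face the other way.
            child.localRotation = Quaternion.LookRotation(child.localPosition);
        }
    }
}
```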

Initial test for menu layout and pointer interaction

I attached the menu to the left motion controller using the AttachToController script in MRTK. With this script, you can easily specify the handedness (left or right) and the element of the controller that you want to attach to.

Using the pointer on the right controller, you can point at and select items in the menu. Since the menu could get in the way of the grabbing interaction, I added a show/hide animation and assigned it to the motion controller’s menu button event. The user can show or hide the menu by pressing the menu button.
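
As a sketch of the wiring, the menu button press can be caught through Unity’s InteractionManager events for Windows Mixed Reality controllers. The menuRoot field is a placeholder, and a real implementation would trigger the show/hide animation instead of a plain SetActive:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

// Toggles the controller-attached menu when the Menu button is pressed.
public class MenuToggle : MonoBehaviour
{
    public GameObject menuRoot;   // the menu attached to the left controller
    bool visible;

    void OnEnable()  { InteractionManager.InteractionSourcePressed += OnSourcePressed; }
    void OnDisable() { InteractionManager.InteractionSourcePressed -= OnSourcePressed; }

    void OnSourcePressed(InteractionSourcePressedEventArgs args)
    {
        if (args.pressType != InteractionSourcePressType.Menu) return;

        visible = !visible;
        menuRoot.SetActive(visible);   // swap for an animation trigger if preferred
    }
}
```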

Show & hide with menu button press event. Teleportation behavior.

Intro scene

With the museum layout and content ready, I added a simple intro scene with an animated logo. I used Blender (free and open source) to create the 3D text logo and imported it into Unity. In Unity, I used an Animator and Animation clip to achieve a simple animated fade-in effect. Unity’s keyframe animation with a timeline is very similar to other timeline-based applications; you can easily pick it up if you are familiar with After Effects or Flash.
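
If you prefer script over keyframes, the same fade-in can be approximated in a short coroutine. A sketch, assuming the logo’s material uses a transparent rendering mode and exposes a standard color property:

```csharp
using System.Collections;
using UnityEngine;

// A script-based alternative to an Animator fade-in: ramps the logo
// material's alpha from 0 to 1 over a fixed duration.
public class LogoFadeIn : MonoBehaviour
{
    public float duration = 1.5f;

    IEnumerator Start()
    {
        var material = GetComponent<Renderer>().material;
        Color color = material.color;

        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            color.a = Mathf.Clamp01(t / duration);
            material.color = color;
            yield return null;   // wait one frame
        }
        color.a = 1f;
        material.color = color;
    }
}
```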

Creating 3D logo in Blender. Adding animation keyframes in Unity.


In the intro scene, the user can overview the museum from a bird’s-eye view and see the five sections based on type classification categories.

Overview of the museum with section indicators

Tooltip for the motion controllers

One of the important aspects of the first-run experience in a mixed reality app is the introduction of the input method and button mapping. Since the user can get lost in a fully immersive virtual space, it is important to clearly communicate how to interact with the world in your app experience. Especially if your app uses customized button mapping, it is crucial to introduce the mapping either through a tutorial scene or through tooltips on the controller.

In Museum of Type, I used the standard default button mapping:

  • Trigger for Select
  • Thumbstick for Teleport and Locomotion
  • Grab button for grabbing and releasing objects
  • Menu button for displaying menus

Default button mapping information — Mixed Reality Portal

To clearly show the available button interactions, I created simple tooltips attached to the controllers. They point to specific buttons and explain their behavior.

Tooltips attached to the motion controllers

Type Playground

At the end of the museum experience, I created a space where the user can observe and play with type. It is still in an early stage, but I want to bring in additional features, including font and color options, just like Typography Insight for HoloLens. This will become a place where the user can experiment with historically important typefaces in 3D space.

Type Playground

Adding 3D app launcher

In Windows Mixed Reality, you can create a three-dimensional object to be used as an app launcher. It could be a logo or any 3D object that represents your app. This object can be placed in the cliff house, just like other 3D objects. The 3D model asset should be exported in glTF 2.0 (.glb) format. You can find detailed design and development guidelines, along with step-by-step instructions and examples, on this page.

Creating 3D model in Blender / Testing the model in the cliff house
Default 2D app window vs. 3D app launcher

The 3D launcher can be manipulated just like other objects in the cliff house

Publishing app to Microsoft Store

The app submission process is similar to that of a 2D Universal Windows Platform app. You just need to specify the correct device type and input method: Windows Mixed Reality immersive headset and motion controllers.

Microsoft Dev Center Dashboard

Since you can include a video trailer, you can demonstrate the core experience with a video capture. To capture a video, you can use the video recording feature in Mixed Reality Portal: simply click the ‘Video’ icon in the Start menu or say “Hey Cortana, start recording”. Since this voice command also works inside your app experience, you can easily take a picture or start/stop video recording.

Store submission with screenshots and videos
You will be able to see your app in the Microsoft Store
