A User Facial 3D Model Generator for designers.

An academic computational design project done at Yale University

 
 

Time

Sept 2020 - Dec 2020

Tools

Processing, FaceOSC, Grasshopper, Rhino

Team

Design team: Isabel Li

Instructor: Michael Szivos

My Role

Designer
Programmer

Overview

 

About FaceArch

FaceArch is a software-based design product that allows users to design 3D objects with their faces. Building on the concept of VR modeling, the application lets users apply their facial interactions as a 3D modeling tool. With this product, we imagine more dynamic interactions between humans and 3D modeling in the future. It removes the pain point of having to rely solely on a mouse to make 3D models and adds multimodality to the application. For example, architectural designers can design a building with their facial expressions and create 3D building prototypes that carry feelings, respond to different facial expressions, and produce corresponding characteristics.

For this project, my main contributions were defining the product scope, designing the key interactive prototypes, programming, recording, and implementing the application.

Project Goals

 

The future of 3D modeling with human faces

FaceArch provides an immersive modeling experience by using facial expressions in a virtual 3D modeling interface. The project responds to designers' current working habits and design processes. We created a facial 3D model generator to explore the dynamic interactions between humans and 3D modeling, using facial expressions as a new kind of mouse for producing and designing 3D massing models in Rhino. We see the potential for this project to be implemented in different scenarios and for different user groups, enabling a more interactive design process.

 

Solution

How we got here

Design Problems

 

This emotional architecture generator explores the dynamic interactions between architecture and humans. We intend to create architectural prototypes that carry feelings, respond to different facial expressions, and produce corresponding characteristics.

Inspiration

 

Virtual Reality & 3D CAD Modeling

This project evolves from the concept of VR modeling. There are increasingly many new ways of digital modeling today that use VR and other technologies. This is a huge improvement for inclusive design, opening 3D modeling to users who are not familiar with traditional modeling software.

Understanding our possible users

User Insights

We identified multiple possibilities for how it could be applied in the industry:

  1. For people who have limited knowledge of Rhino or other 3D modeling software, this product provides a better and more straightforward computer modeling experience.

  2. For people who spend long hours in front of computers, this product can help them develop healthier working habits.

  3. It can also be used as an interactive installation in museums to communicate with people who are new to architectural design.

  4. It could be adapted as a game feature.

 

How we built it

 

We used FaceOSC together with Grasshopper. We wrote the core logic in Processing and fed the resulting data into Grasshopper to visualize the product.
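As a rough illustration of the first half of this pipeline, the sketch below shows how FaceOSC data might be received in Processing with the oscP5 library. The handler names and the on-screen readout are ours for illustration; the port (8338) and the /gesture/... address patterns are FaceOSC's defaults.

```java
import oscP5.*;

OscP5 oscP5;
float mouthWidth, mouthHeight, eyebrowLeft, eyebrowRight;

void setup() {
  size(400, 400);
  // FaceOSC broadcasts on port 8338 by default
  oscP5 = new OscP5(this, 8338);
  // plug() routes incoming OSC messages to the handlers below
  oscP5.plug(this, "mouthWidthReceived",  "/gesture/mouth/width");
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
  oscP5.plug(this, "eyebrowLeftReceived",  "/gesture/eyebrow/left");
  oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
}

public void mouthWidthReceived(float w)   { mouthWidth   = w; }
public void mouthHeightReceived(float h)  { mouthHeight  = h; }
public void eyebrowLeftReceived(float v)  { eyebrowLeft  = v; }
public void eyebrowRightReceived(float v) { eyebrowRight = v; }

void draw() {
  background(0);
  fill(255);
  text("mouth w/h: " + nf(mouthWidth, 1, 2) + " / " + nf(mouthHeight, 1, 2), 20, 30);
  text("brows L/R: " + nf(eyebrowLeft, 1, 2) + " / " + nf(eyebrowRight, 1, 2), 20, 50);
}
```

In the full pipeline these values are not just displayed: they are mapped to modeling parameters and passed on to Grasshopper, as described in the sections below.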

 

Design Explorations

 

We continuously iterated on our designs through rigorous weekly feedback.

Scheme 1: point → line → extrusions; facial expressions change each point's Z-depth.

Scheme 2: use a point-net to map the face, and use facial features as parameters that control the Z-depths of the 3D points.

A grid system

Point-net → surface → extrusions; facial expressions change the distance between points (a toy sketch of this grid logic follows below).
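To make the grid-system idea concrete, here is a toy Processing sketch (not the project code) in which a regular point grid is extruded into boxes and a single facial parameter drives the Z-depth of each point; mouse position stands in for the FaceOSC mouth-height value.

```java
// Toy grid scheme: a point-net extruded into boxes, with one facial
// parameter lifting the points near the grid centre.
int cols = 10, rows = 10;
float cell = 30;
float mouthHeight = 0;   // would be fed by FaceOSC in the full sketch

void setup() {
  size(600, 600, P3D);
}

void draw() {
  background(255);
  // centre the grid and tilt it so the extrusions read in 3D
  translate(width/2, height/2);
  rotateX(PI/3);
  translate(-cols*cell/2, -rows*cell/2);
  // stand-in input when no face data is wired in
  mouthHeight = map(mouseY, 0, height, 0, 10);
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      // distance from the grid centre weights how much each point is lifted
      float d = dist(i, j, cols/2.0, rows/2.0);
      float z = mouthHeight * 10 * exp(-d * d / 8.0);
      pushMatrix();
      translate(i * cell, j * cell, z / 2);
      box(cell * 0.8, cell * 0.8, max(z, 1));  // one extrusion per grid cell
      popMatrix();
    }
  }
}
```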

 
 

System of Operation

How do we assign actions to different facial expressions?

Instantaneous data (in mm) collected from the human face; a sketch showing how these distances can be derived from tracked landmarks follows the list:

  1. Distance between two eyebrows

  2. Length of the nose bridge

  3. Width of the mouth

  4. Height of the mouth

  5. Distance between two eyes
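A minimal helper showing how these five distances could be computed once the relevant landmark points are available. The landmark names are hypothetical placeholders; in the project they would come from FaceOSC's tracked face mesh.

```java
// Hypothetical helper (names are ours) turning tracked landmark points
// into the five measurements listed above, using plain PVectors.
float[] faceMeasurements(PVector browL, PVector browR,
                         PVector noseTop, PVector noseTip,
                         PVector mouthL, PVector mouthR,
                         PVector mouthTop, PVector mouthBottom,
                         PVector eyeL, PVector eyeR) {
  return new float[] {
    PVector.dist(browL, browR),           // 1. distance between the eyebrows
    PVector.dist(noseTop, noseTip),       // 2. length of the nose bridge
    PVector.dist(mouthL, mouthR),         // 3. width of the mouth
    PVector.dist(mouthTop, mouthBottom),  // 4. height of the mouth
    PVector.dist(eyeL, eyeR)              // 5. distance between the eyes
  };
}
```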

3D Languages and Components:

  1. Volume/Box

  2. Planes

  3. Sphere

  4. Negative Volume

  5. Cylinder

Assigning data to architectural prototypes for motion (a mapping sketch follows this list):

  1. The height of the mouth determines the height of the box

  2. The distance between the eyebrows controls the subdivision of the box

  3. The location of the chin determines the domain of one box
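A sketch of how this mapping stage might look in Processing, with the measurements scaled into the three box parameters and forwarded to Grasshopper over OSC. The address pattern, port, and value ranges here are assumptions for illustration, not the project's actual settings.

```java
import oscP5.*;
import netP5.*;

// Hypothetical mapping stage: scale the facial measurements into the three
// box parameters above and forward them to Grasshopper over OSC.
// Address pattern, port, and ranges are illustrative assumptions.
NetAddress grasshopper = new NetAddress("127.0.0.1", 6448);

void sendBoxParameters(OscP5 osc, float mouthHeight, float browDistance, float chinY) {
  float boxHeight    = map(mouthHeight, 0, 10, 1, 30);         // 1. mouth height  -> box height
  int   subdivisions = (int) map(browDistance, 2, 6, 1, 8);    // 2. brow distance -> box subdivision
  float boxDomain    = map(chinY, 0, 480, 5, 50);              // 3. chin location -> box domain
  OscMessage msg = new OscMessage("/facearch/box");
  msg.add(boxHeight);
  msg.add(subdivisions);
  msg.add(boxDomain);
  osc.send(msg, grasshopper);
}
```

On the Grasshopper side, an OSC/UDP listener (for example, a plugin such as gHowl) could pick up these values and drive the corresponding modeling parameters; that half of the pipeline is not shown here.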

What I learned

 

This was the first time I designed a VR-related application. I was impressed by the capability of these platforms to create more interactive 3D experiences and to explore the possibilities of VR modeling. I believe this could become an innovative technology platform that empowers people to imagine what was previously unreachable.