SmartPhotography: An iOS App for Assisted Photography

Name: Jingwan (Cynthia) Lu
Affiliation: Adobe Research
Phone: 6097591648
E-mail: jingwan.lu.cynthia@gmail.com
Website:
Knowledge Required: • Proficiency in C/C++
• Familiarity with the iOS development environment, Xcode, and Swift
• Basic understanding of computer graphics and human-computer interaction concepts
• Basic knowledge of OpenGL ES and GPU shader programming
• Basic knowledge of conducting user testing to evaluate a user workflow design

Recommended Skills (not required):
• Basic understanding of machine learning and deep learning concepts.
• Familiarity with Python and deep learning frameworks such as TensorFlow and PyTorch
• Experience working in a large codebase

Motivation: • Mentor students to conduct cutting-edge applied research in the field of HCI and computer graphics
• Test out and prototype new product ideas
• Discover and cultivate talented students for potential hire
Description: Many camera apps allow users to retouch their photos after they take them. However, retouching addresses only the last step of the photographic process. We believe this is a missed opportunity to explore creative interaction during capture. An app that guides the user during photo capture has the potential to improve the quality of the source photograph and, perhaps more importantly, to educate the user about photographic principles (lighting, pose, composition, etc.). Some existing apps do try to guide the user toward more appealing photos, but none of them explain or teach the reasoning behind their suggestions.

For this project, we will first focus on teaching users to take better head-shot portraits in both one-person selfie mode and two-person portrait mode. We plan to leverage existing 3D face fitting and lighting estimation technology and implement an interface for an app that scans the user’s face and overlays lighting and pose suggestions onto the image in real time. After the user selects a lighting and pose preference, the app will guide the user to approximate the desired lighting and pose, allowing them to reveal their best selves.
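
As a rough sketch of what the real-time capture loop could look like, the snippet below assumes (purely for illustration) that Apple's ARKit TrueDepth face tracking stands in for the 3D face fitting and lighting estimation technology; the class name, target parameters, and error measures are hypothetical and not part of the proposal.

```swift
import ARKit
import simd

// Minimal sketch (assumed design): read the current head pose and scene lighting
// from ARKit face tracking and compare them against a user-selected target,
// which the app could translate into on-screen guidance overlays.
final class CaptureGuidance: NSObject, ARSessionDelegate {
    let session = ARSession()

    // Hypothetical target chosen by the user (e.g. a 3/4 pose with side lighting).
    var targetYaw: Float = 0.3                          // radians
    var targetLightDirection = simd_float3(-0.5, -0.7, -0.5)

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let face = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

        // Approximate head yaw from the face anchor's z axis (rotation about y).
        let zAxis = face.transform.columns.2
        let yaw = atan2f(zAxis.x, zAxis.z)
        let poseError = abs(yaw - targetYaw)

        // With face tracking, ARKit's light estimate is directional.
        if let light = frame.lightEstimate as? ARDirectionalLightEstimate {
            let lightError = 1 - simd_dot(simd_normalize(light.primaryLightDirection),
                                          simd_normalize(targetLightDirection))
            // A real app would map these errors to arrows or shading overlays
            // instead of just logging them.
            print("pose error: \(poseError), light error: \(lightError)")
        }
    }
}
```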
To expand the scope of the project further, time permitting, we will investigate guided full-body portrait photography. In this case, the lighting guidance would need to be extended to take the background of the shot into account and the pose guidance would need to account for full-body motion. Another interesting area to explore is the gamification of the app: For example, we might guide users to match the lighting and pose in a photo that they upload, score how well they matched the photo, and then provide a way for them to share the results on social media.
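
The gamified scoring could be as simple as mapping pose and lighting errors to a 0-100 match score. The sketch below is a hedged illustration under assumed simplifications (pose summarized by yaw/pitch angles, lighting by a dominant light direction); the weights and the error-to-score curve are arbitrary choices for demonstration.

```swift
import Foundation
import simd

// Illustrative scorer: compares live pose/lighting estimates against those
// extracted from an uploaded reference photo. All constants are assumptions.
struct MatchScorer {
    var poseWeight = 0.5
    var lightWeight = 0.5

    /// Angles in radians; light directions as unit vectors.
    func score(liveYaw: Float, livePitch: Float, liveLight: simd_float3,
               refYaw: Float, refPitch: Float, refLight: simd_float3) -> Int {
        let poseError = Double(abs(liveYaw - refYaw) + abs(livePitch - refPitch))
        let lightError = Double(1 - simd_dot(simd_normalize(liveLight),
                                             simd_normalize(refLight)))
        // Exponential falloff: zero error scores 1.0, large errors approach 0.
        let combined = poseWeight * exp(-3 * poseError) + lightWeight * exp(-3 * lightError)
        return Int((combined * 100).rounded())
    }
}
```

A score like this could then be displayed after capture and attached to the shared result.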
 
This project presents interesting challenges in the field of human-computer interaction and usability. We will be asking the user to pose and position themselves and other objects in the physical world – what is the best way to clearly and intuitively convey to the user what they should do? Can we “suggest” poses and lighting setups that flatter the user’s unique facial shape and features? How do we support both the selfie mode and the two-person mode with intuitive and consistent user workflows? And finally, how can we evaluate our success?
Objectives: • Have a complete app with a polished UI, implementing at least the one-person and two-person head-shot portrait photography guidance
• Conduct a formal user study to evaluate the app’s success
• Submit a paper to a top HCI conference such as UIST 2018 (deadline 4/4/2018)
Deliverables: • Functional capture app with polished UI design
• Paper draft submitted to a top HCI conference
• Potential patent disclosure
Other comments: This is a research-oriented project. It is a follow-up to Alannah Oleson’s 2017 summer internship project at Adobe. We would like to request Alannah Oleson, Anne Lei, and Aileen Thai as the team for this project. Three researchers from Adobe Research will serve as mentors for 1-2 hours per week. At the end of the project, we would like the students to transfer the IP to Adobe so that Adobe holds exclusive IP rights to the ideas and the code.
