UI/UX Designer
Voice Interface Prototyping
Co-Design Facilitation
Usability Testing
VoCreate offers a voice-user interface specifically for art creation, revolutionizing how we approach artistic expression. VoCreate empowers individuals with motor impairments by providing a novel way to bring their ideas and creativity to life through drawing.
My research involved interviewing and prototyping with people with spinal cord injuries that result in severe motor impairments, limiting control of their bodies from the shoulders down.
Hand splints, robotic prosthetics, and eye tracking allow users to interact with physical and digital devices, but these tools do not provide accurate control and often depend on caretaker assistance.
Opportunity
An easier method to learn and use
Injuries such as spinal cord injury impair a person's ability to control their arms, hands, and legs. This limits their ability to use common input devices, including a pen/stylus, mouse, or keyboard.
Opportunity
Empower artists to draw with more control
VoCreate is meticulously designed with input from artists with hand impairments, to ensure it meets their needs and allows them to express themselves freely.
Users control the app with their voice, using sustained vowel sounds to move the brush and paint. Making the "EE" sound moves the paintbrush to the left, and making the "AW" sound moves it to the right. This is fun, and also useful for drawing longer strokes while maintaining artistic control.
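As a rough illustration, the vowel-joystick idea can be sketched as a mapping from recognized vowel sounds to brush-movement deltas. The "EE" = left and "AW" = right mappings come from the description above; the extra vowels ("OO", "AH"), the step size, and the function names are my own illustrative assumptions, not part of VoCreate's documented design:

```python
# Sketch of a "vowel joystick": each recognized vowel sound maps to a
# direction vector (dx, dy) in canvas pixels. A real implementation would
# receive vowel labels from a speech classifier once per audio frame.
VOWEL_DIRECTIONS = {
    "EE": (-1, 0),   # move brush left (from the design above)
    "AW": (1, 0),    # move brush right (from the design above)
    "OO": (0, -1),   # assumed: move brush up
    "AH": (0, 1),    # assumed: move brush down
}

def move_brush(position, vowel, step=5):
    """Return the new brush position after one frame of a sustained vowel."""
    dx, dy = VOWEL_DIRECTIONS.get(vowel, (0, 0))  # ignore unknown sounds
    x, y = position
    return (x + dx * step, y + dy * step)

# A sustained "EE" over several frames draws a long leftward stroke.
pos = (100, 100)
for _ in range(3):
    pos = move_brush(pos, "EE")
print(pos)  # (85, 100)
```

Because the sound is sustained rather than spoken once, the brush keeps moving while the user holds the vowel, which is what makes long continuous strokes possible without repeated commands.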
The redesigned tool menu allows users to access drawing tools from anywhere on the screen, making it more accessible than a fixed toolbar. This reduces the need to constantly reach for frequently used tools, saving users energy and breath.
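The "summon anywhere" menu can be sketched as a radial layout: tools are placed evenly on a circle around the point where the menu is invoked, so no tool is ever far from the brush. The tool names, radius, and function below are illustrative assumptions, not VoCreate's actual implementation:

```python
import math

def radial_menu_positions(center, tools, radius=60):
    """Return {tool: (x, y)} laying tools out evenly on a circle around center."""
    cx, cy = center
    positions = {}
    for i, tool in enumerate(tools):
        angle = 2 * math.pi * i / len(tools)  # evenly spaced angles
        positions[tool] = (cx + radius * math.cos(angle),
                           cy + radius * math.sin(angle))
    return positions

# Summon the menu wherever the brush currently is, e.g. at (200, 150).
menu = radial_menu_positions((200, 150), ["brush", "eraser", "color", "undo"])
```

Anchoring the layout to the invocation point, rather than a fixed toolbar edge, is what removes the long "reach" across the canvas for each tool change.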
To test the hypothesis, we first needed to validate a primary assumption: are users comfortable drawing with their voice? I conducted four experiments with users, stakeholders, and experts to determine how users wanted to draw using their voice. Key insights from these tests included:
Our users are accustomed to contemporary art and design tools, which rely on complete control over what is created, so it was interesting to see where AI tools can help generate more ideas in less time at the cost of that control. It was important to stay focused on the question: what is the user's goal?
Drawing with AI transforms the entire concept of drawing from making to thinking. The basic interaction is having conversations with the system, which generates images based on those conversations. Instead of mapping voice to direct computer control, this concept questions what it really means to think about creating something.
User testing and observation revealed that high control was desired. I tested two main methods of moving the paintbrush on the screen:
After several rounds of user testing, the Vowel Joystick was selected for the following advantages:
Once the interaction method was decided, I sketched the main screen based on users' mental models of drawing tools. This was important for highlighting the key features needed to provide a user-friendly experience. Every interaction was adapted for a voice-only input method.
Based on the sketches, wireframes were designed in Figma. The goal was to test whether the information and tools available met users' expectations and mental models, and whether this was a viable solution to pursue.
The prototype was tested in semi-moderated remote sessions with four users. The goal was to learn whether the interaction was easy to understand and met users' expectations. Since the entire system was meant to run on voice, users were only allowed to speak to interact with the wireframes, while I performed the actions on their behalf. The key results were:
To understand which adaptive tools artists with hand impairments are using, I analysed user experiences of six products and services. Splints were the most commonly used adaptive tools: wooden or plastic appendages that allow artists to hold a pencil, stylus, or brush. However, they are cumbersome, require frequent assistance, and do not help with finer motor movements. Through this analysis I identified key opportunities for innovation.
The first user interface was based on standard design patterns from the art software industry, with a modern look and feel. However, these patterns proved poorly suited to voice interaction.
I redesigned each component to be more compatible with voice UX, improving navigation accessibility and feedback loops and simplifying the UI. The result was a more usable and desirable experience.