LifePics offers an API that lets developers add a new revenue model (or their first) to the thousands of photo apps in the mobile landscape. LifePics has partnerships with local printers worldwide, as well as its own printing capabilities. Users order photo prints in a number of formats, either for local pickup from a partner printer or for delivery to their homes. The developer, LifePics, and the printer all take a cut of the profit from each sale.
The LifePics API is designed and developed to be a simple plug-in, offering limited customization of the color palette and photo sources. This was an interesting project in that it focused far less on an individual brand feel and more on keeping the interaction and feel as close as possible to a device's default design. The reasoning was that the API needs to fit as seamlessly as possible within a wide variety of apps, while ideally also inspiring developers to rethink their own app's interaction and design to match the API's high quality of UI and UX.
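To give a rough sense of the kind of integration surface this implies, the sketch below shows how a host app might hand the plug-in its two accent colors and a set of photo sources. It is a minimal illustration only; the type and property names (PrintFlowConfiguration, PhotoSource, and so on) are hypothetical and not the actual LifePics API.

```swift
import UIKit

// Hypothetical configuration surface for the plug-in — illustrative names,
// not the real LifePics API.
struct PrintFlowConfiguration {
    var primaryColor: UIColor      // main accent: buttons, selection states
    var secondaryColor: UIColor    // secondary accent: badges, highlights
    var photoSources: [PhotoSource]
}

enum PhotoSource {
    case cameraRoll
    case appAlbum(name: String)    // photos the host app manages itself
}

// The host app supplies its palette and sources, then presents the flow.
let config = PrintFlowConfiguration(
    primaryColor: .systemBlue,
    secondaryColor: .systemOrange,
    photoSources: [.cameraRoll, .appAlbum(name: "Edited Photos")]
)
```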
Any photo printing app has a certain number of steps required to fulfill an order: pick a product to print, pick photos, crop or edit photos (or don't), choose print sizes, choose quantities, and order or add to cart. Looking at the apps already out there, it was clear that the order of these steps is not set in stone, and not every ordering is successful. Early on we decided it made the most sense to get the user excited about the product and the photos they can print before funneling them into the less enticing process of picking print sizes and quantities. We also thought it best to avoid cropping entirely by separating rectangular and square images into two categories, each with its own size and quantity selection. The faster we can get the user to the point of ordering for delivery or pickup, the more orders will actually be completed in the long run.
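One way to picture the no-cropping decision: instead of asking the user to crop, photos can be bucketed by aspect ratio up front and routed to square or rectangular print products. Below is a minimal sketch of that idea, using a hypothetical Photo type and tolerance value rather than anything from the shipped SDK.

```swift
import CoreGraphics

// Hypothetical photo model — only the pixel size matters here.
struct Photo {
    let id: String
    let pixelSize: CGSize
}

// Split photos into square and rectangular buckets so each group can go
// straight to size/quantity selection without a cropping step.
func partitionForPrinting(_ photos: [Photo],
                          squareTolerance: CGFloat = 0.02) -> (square: [Photo], rectangular: [Photo]) {
    var square: [Photo] = []
    var rectangular: [Photo] = []
    for photo in photos {
        let ratio = photo.pixelSize.width / photo.pixelSize.height
        // Treat anything within ~2% of 1:1 as square.
        if abs(ratio - 1.0) <= squareTolerance {
            square.append(photo)
        } else {
            rectangular.append(photo)
        }
    }
    return (square, rectangular)
}
```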
Having established a good basic approach, I set out to lay out everything that needed to be on each page and solidify the flow of the user interaction. I tend to work in Fireworks or Sketch for wireframe production; I'm quick in both and can produce wireframes that remain adaptable into final comps later on. It is also easy to export images to view on devices and get a good feel for how they work.
I tend to start with far simpler, grayscale imagery in order to free the conversation of any design-related side notes. As things come together I plug in more and more realistic imagery. Nav items and any other device-specific imagery are used from the get-go.
Below are the final wireframes that were adapted into the final designs. The images walk through the individual user flows within the app:
The comps below show how my original wireframes developed into the final designs. With this particular design problem, a lot of work went into deciding which items would be influenced by the primary and secondary colors picked by the developer; you can view my exploration in the first image at the top of the page. Many times, wireframes are used solely as a way of broadly visualizing the user interaction and can change as the design comes together. This allows for iteration on ideas as the product takes shape; if original ideas prove not to be ideal, there is no problem tweaking right within the designs or stepping back to whiteboard and wireframe a section.
One major design challenge was how best to handle a multi-step process, something not necessarily laid out in the available design principle documents. On each page the user can select or edit multiple objects or fields before moving forward; traditionally this might be handled with an out-and-back nav stack or multiple modal popups. The final design used a floating bar with back and forward buttons on the left and right sides, and an info section in the middle highlighting the photo and print counts tallied through the process. The final solution is intuitive, informative, and simple.
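As a loose sketch of that pattern, a single floating bar component can own the back and forward actions plus a running tally, so each step's screen only has to update the summary text. The component and property names below are hypothetical, not the shipped implementation.

```swift
import SwiftUI

// Hypothetical floating step bar: back/forward on the edges, a running
// tally of photos and prints in the middle.
struct FloatingStepBar: View {
    let summary: String            // e.g. "12 photos · 18 prints"
    let onBack: () -> Void
    let onNext: () -> Void

    var body: some View {
        HStack {
            Button("Back", action: onBack)
            Spacer()
            Text(summary)
                .font(.subheadline)
            Spacer()
            Button("Next", action: onNext)
        }
        .padding()
        .background(Color(.secondarySystemBackground))
        .clipShape(Capsule())
        .padding(.horizontal)
    }
}
```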