Category: UX Case Study

BirdsEye Feature Filter



The BirdsEye bird-finding app is one of the best apps for bird enthusiasts to look up birds found in their area. The app is about to introduce a feature-based bird identification filter, which would let users out in the field narrow down the search for a bird’s name by describing its color, size, and the kind of habitat in which they spotted the bird.

My role

I worked with a visual designer, the developers, and stakeholders to provide an intuitive, easy-to-use UX for this new feature.


I was completely new to the world of birding, so I spent some time familiarizing myself with the current app as well as the newly proposed features. The actual structure of the proposed feature was fixed – three filters, one each for the bird’s habitat, color, and size. We conducted semi-structured interviews with long-time app users as well as new or non-users throughout the design process.

I also studied similar features on other bird-finding apps to understand their workflows and the expectations of the birding community as a whole.

Design-test cycles

First design iteration

With initial wireframing, we were trying to answer some questions such as:

  1. Should the filter be a separate page, a popup window, or a slide-out filter?
  2. What is the logical place for a user to look for a filter like this?
  3. What should be the position of the filter, and how should it expand and collapse?

Filter Design: Interviews revealed that new or amateur users often didn’t have any clue about the bird name or family, and tended to scroll through the list of all nearby birds until they found something that looked familiar. The photos helped, of course. This meant that dynamic filtering as they made their choices would be really helpful, something the developers of the app also quickly agreed with. We thus decided to go with the same-page slide-out filtering.
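The dynamic filtering itself is conceptually simple – each selection immediately narrows the list of nearby birds. A minimal TypeScript sketch of the idea (the types and function names are my own illustration, not the app’s actual code):

```typescript
// Illustrative model of a bird record and the user's filter selections.
interface Bird {
  name: string;
  colors: string[];
  size: "small" | "medium" | "large";
  habitats: string[];
}

interface FilterSelection {
  colors?: string[];       // any of the selected colors matches
  size?: Bird["size"];
  habitat?: string;
}

// Re-run on every tap so the result list updates as choices are made.
function filterBirds(birds: Bird[], sel: FilterSelection): Bird[] {
  return birds.filter(b =>
    (!sel.colors?.length || sel.colors.some(c => b.colors.includes(c))) &&
    (!sel.size || b.size === sel.size) &&
    (!sel.habitat || b.habitats.includes(sel.habitat))
  );
}
```

An empty selection returns the full nearby-bird list, which matches the scrolling behavior amateur users already relied on.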

Home page and slide out filter panel

Launch the filter: The page already had a search option, and we decided to merge the new filter with the search, on the reasoning that if users didn’t know the name, they would want to search by the characteristics of the bird instead.

Clicking on the search icon brings up options for two ways to search

Filter behavior: We also realized that users might want to scroll, or simply get the filter out of the way, when they wanted to see the results. We still wanted to show a short summary of their selections so they knew what they had filtered by. We opted for an icon-based summary built from the icons provided by the visual designer.

Filter states: expanded versus collapsed

After hashing out some more details, we shared an initial InVision prototype with the stakeholders and some users. The images below show a walkthrough of our prototype:

 Clicking on search and tapping “Look Up” launches filter

Selecting colors, and tapping on “size” to filter by size  

Selecting size, habitat, and collapsing the filter

First round of testing

Qualitative testing with the InVision prototype revealed two important findings:

  1. The location of the filter – would a filter at the bottom be better? We initially placed the filter at the top, because the hierarchy of information made more sense there, and the button to trigger it was at the top. But this made for a worse experience: as users played around with the filter, they had to move their hand out of the way to see the results.
  2. The two-step process to launch the new feature was cumbersome and unintuitive: the app already has a search-by-name option, and this would be a new “search by features or looks” option, so we had merged the two. But we realized that most users used ‘search by name’ mainly to find out when a particular bird was last seen in the area, and ‘search by characteristics’ only when they saw a bird and didn’t recognize it. These two have fairly different use cases and should be presented separately. This considerably influenced our further design prototypes.

Further design and test cycles

See our latest InVision prototype here.

After a few more rounds of quick prototyping, we finalized the filter launch behavior: an icon alongside the search. This made the filter easy to launch and kept some association with the regular search without being dependent on it. We also went with the filter-at-the-bottom concept and made some changes accordingly.

Before (2 step) process to launch filter at top


After (1 step) process to launch filter at bottom 

We wanted to test whether closing the filter as users started scrolling would be a good idea. I quickly created two Framer prototypes and we ran them by a few users as well as the app developers – everyone seemed to prefer the one where the filter automatically collapsed into the summary line.

See a comparison of the two behaviors created using Framer:

One: filter doesn’t collapse when scrolling and two: filter collapses when scrolling
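The behavior we ended up preferring boils down to one small piece of state logic. A hedged TypeScript sketch of the auto-collapse variant (names and the threshold value are illustrative, not taken from the Framer prototypes):

```typescript
// The filter panel is either fully expanded or collapsed to its summary line.
type FilterState = "expanded" | "collapsed";

// Variant two from the comparison: the filter collapses into the summary line
// as soon as the user scrolls the results list past a small dead zone.
function nextFilterState(
  current: FilterState,
  scrollDelta: number,    // pixels scrolled since the filter was opened
  threshold: number = 10  // dead zone so incidental touches don't collapse it
): FilterState {
  if (current === "expanded" && Math.abs(scrollDelta) > threshold) {
    return "collapsed";
  }
  return current; // variant one simply never calls this on scroll
}
```

Keeping the collapse one-way (scrolling never re-expands the filter) is what keeps the summary line stable while users browse results.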



Future work

We provided all the assets and InVision workflows to the developers, and the updated app will be launched soon. In terms of future work, we want to address the following issues we found during usability testing:

1. The size section was confusing for many users, as there was no reference for what exactly “small” or “large” meant. We are currently working on new, more intuitive designs, such as a slider with examples of small and large birds at the two ends.

2. We would like to go from filter -> summary in a more visually connected way, so that users can associate the summary icons easily with the original icons.

Dashboard design

As a part of the HTML5 redesign of MicroStrategy’s self service analytics product, I was responsible for UX design for all visualizations including line, bar, area, bubble graphs, heat maps and network graphs. This write-up captures the UX design process of adding dashboard formatting and customization capabilities to the product.

Biggest challenges

  • 100+ properties in the legacy software: We analyzed, filtered, sorted and presented an ideal subset of these.
  • Keep existing power-users happy with flexibility, but make the core experience simple and clutter-free for new users.


  • Understand the what and why of dashboard customization through user interviews, card-sorting exercises, studying existing customer dashboards, best practices for dashboard creation, frequently used properties, and the background of the legacy software
  • Based on this understanding as well as the product vision, ideate and create wireframes: iterative design process
  • Create simple working prototypes for user-testing smaller hypothesis, and end-to-end user studies at main checkpoints in the development process
  • Collaborate closely with visual designers and developers at every step to ensure fit-and-finish

What I did: User research | Card-sorting | Wireframing | Prototyping | User-testing


Background and vision

One of the biggest changes between the earlier version and the newly proposed version was to enable flexible and powerful dashboard creation capabilities: being able to format the colors and styles on the dashboard, and to add custom selectors, images, and text. These capabilities are required for many different reasons – company-specific branding, personal aesthetic preferences, and more effective visual storytelling through customized colors and styles. While MicroStrategy’s full-featured product does offer all of these, it is aimed at BI and IT departments, not self-service analytics.
During this project, I worked closely with a UX designer and a UX lead, program managers, and quality engineers as well as developers across 2 different countries.
The project kicked off with multiple stakeholder interviews – we talked with members of product management and marketing teams to understand the big picture vision.

User Research

Exhaustive list of formatting options
Users and their roles (blurred out)
One of the big advantages of doing UX research at MicroStrategy is that many of our users are internal – our consultants, of course, but also teams such as finance and sales that use the MicroStrategy BI suite. Preliminary user research included semi-structured interviews with internal users and an analysis of dashboards they had created with MicroStrategy’s earlier (feature-rich but hard to use) BI tool. To get some background on non-internal users, we analyzed our enhancement request system to see which features were most requested.
This phase also included competitive analysis, where we partnered with our product teams to understand what kinds of features are typically provided. In addition, we studied good dashboard design practices – one of the goals of our project was to make it easy for users to design good, useful dashboards by providing great defaults. We ended up with an exhaustive list of properties that users currently used, that our earlier product offered, or that users had requested. We also came away with a fair understanding of which kinds of users typically need which kinds of formatting and customization capabilities – an understanding we kept refining over the iterative design phase.

Iterative Design Phase

With a good understanding of the typical workflows currently used by our users, we dove into our design phase.
The first step was trying to classify the exhaustive list of properties based on user research. There were more than 100 different properties across the different visualizations!

Card sorting of the properties
Properties sorting based on object


Through multiple brainstorming and mapping sessions, we classified the list along the following lines:
(1) Properties that help analyze (all self service users) versus those that help beautify (dashboard design users)
(2) Properties classified by the type of object they belonged to
(3) Properties that are used very commonly by most users versus those that are relatively uncommon
We conducted feedback sessions with internal users to update these categories and cut down the list. We also conducted formative usability tests to see if our internal users understood the categorization.


The second step was putting it all together into coherent workflows. We studied how a variety of products across different domains let users format content – Microsoft Word on one hand, Keynote-like applications on the other, our competitors, as well as dedicated graphics programs like Adobe Photoshop and Illustrator.

Initial Wireframes – Different Ideas



I worked closely with the other UX designer on the team and we quickly sketched out multiple different ideas and workflows. These included
1. A dedicated mode for formatting, where users wouldn’t interact with the dashboard in other ways (such as sorting or filtering) and would instead focus on just “beautifying”. This was based on our user research, which indicated that users thought about analytics and dashboard design as two separate steps of the process. As such, we proposed an idea to completely separate the two. With over 50 properties available to customize each visualization, it also made sense to tuck them away in a specific mode and only show them if the user really wanted to change something about the looks – colors, opacity, styles, etc.
2. Formatting panel versus a popup – we proposed a panel to be used which could be collapsed in order to keep the experience lightweight.
3. Commonly used properties always available in context menus. This was based on user interviews and research pointing towards common properties used most often – changing the color of a particular shape, showing data labels, hiding the legend, etc.
4. Context-sensitive toolbar – in contrast to the initial formatting-mode idea, this concept was about having a context-sensitive toolbar that changed based on what kind of object you were working with (lines, text, shapes, etc.). This was discarded fairly quickly because formatting, although important, is not part of the core functionality of any analytics tool.
5. Toolbar for quick formatting within the context menu: We proposed to use context sensitive toolbars that would appear if the user right clicked on an object and selected an option to format it.
6. Preset styles and colors that could be quickly applied

The develop – test – design cycles

Development sprint 1: After hashing through our ideas with PMs and getting some preliminary feedback from internal users, we chose the workflow with a formatting mode, with commonly used properties available at all times via the right-mouse-click (RMC) menu. Design-wise, we chose to go with a panel of properties, classified according to which part of a visualization they helped format. There were other details such as highlighting and reverse-highlighting between the panel and the object being formatted, context-sensitive toolbars in formatting mode, and global quick formatting for all visualizations on a dashboard.

Through the development – test phase, we learnt that no amount of preliminary user interviews can replace the value of getting even a semi-working prototype in front of users to get their feedback.

The biggest feedback we got was that even though users liked the idea of a separate mode in theory, in actual use they found it too heavy and a little confusing. Based on this feedback, we created some quick interactive prototypes in Keynote and conducted multiple rounds of mini usability tests.

Development sprint 2: Based on user feedback from the usability tests, we completely got rid of a separate formatting mode, instead focusing on lightweight formatting capabilities on demand. The user could right click and select format on any of the different parts of the graph to be shown the toolbar instantly. They could also see the exhaustive list of properties in the panel for that object. Clicking anywhere would quickly dismiss the toolbar, keeping the experience lightweight. We also added selection and hover highlights at all times to give adequate feedback to the users as to what exactly they were formatting.

The example below shows our final workflow for axis labels. It showcases some of the complex problems we were dealing with – formatting is only a secondary action, the primary action is analytics – changing the axis scale, origin, sorting, etc. The highlights and workflow try to make this distinction as clear as possible.

New workflow after user feedback and usability tests


The devil is in the details

Although that captured most of the big-picture design process, there were multiple smaller details that we had to iron out along the way.
1. Getting the highlights right: We highlight the object being formatted to give feedback to the users. These highlights needed to be clear, show the effect of making changes, be visually coherent, and make conceptual sense.

I worked with the visual designer on our team to get the highlight effects right based on our workflows, and worked with developers to ensure they were not too costly to implement.

Overview of the effects

2. Mini complex workflows: There were a lot of minor workflows that needed to be tweaked and optimized for common use cases, and that needed to be simple for the user to understand (even though complex to implement).

One such example, which we didn’t end up using, was about selecting data elements by single-clicking, double-clicking, or lassoing. A single click was considered a high-level selection (all elements of a particular color) and a double click a low-level selection (an actual single data point).

The formatting capabilities received a lot of good feedback during the beta testing phase of product launch, and we are continuing to iterate on the workflows going forward.

MicroCharts Widget – Design for the iPhone

The microcharts widget is one of the most popular, data-dense widgets that MicroStrategy offers.
Some of its salient features include sparklines and sparkbars, bullet graphs, and an outline tree mode.

My job was to design this widget to work well on the iPhone – and come up with a design to keep it data-dense yet clutter free!


The existing web widget looks like this:

The redesign process started with a study of existing widgets MicroStrategy offered for mobile devices, the competitive landscape, and cross-team discussions with consultants, tech support engineers and product management to understand the mobile use cases.
We concluded that:

  • Customers will typically use this as a power version of a regular table and, as such, would expect an interaction-rich widget. All of the main functionality needed to be in there; we couldn’t remove too many features that existed in the web or iPad versions.
  • Even though there were design as well as functional challenges with supporting an “outline” mode, we would need to find ways to resolve them: it was a very frequently used feature, and it made a smaller display that much more useful.

Design Challenge 1

Support multiple columns in a user-friendly way

Users have, on average, between 2 and 7 columns that display metrics in different formats: simple numbers, sparklines, or bullet graphs. There are several established interaction patterns for such cases – showing just one column at a time that can be toggled, swiping through multiple columns, or fixing the leftmost (attribute) columns and horizontally scrolling the overflow.

We chose the 3rd option and added some design tweaks to make the customer’s job easier:

  1. The column sizes can vary, in order to support graphs and numbers gracefully. This lets customers see multiple metrics on the screen at the same time and compare them, as long as they fit.
  2. Customers can quickly pick a density setting – normal, high or low, and we calculate the column sizes for them.
  3. Customers can perform a “slow drag” to move the columns gradually or a “quick swipe” to move the next column to the leftmost position within the non-fixed panel.
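The distinction between a slow drag and a quick swipe comes down to release velocity. A hedged TypeScript sketch of the idea (structure, names, and the velocity threshold are my own assumptions for illustration, not the shipped widget code):

```typescript
// State of the horizontally scrollable, non-fixed column panel.
interface ColumnPanel {
  offset: number;      // current horizontal offset of the panel, in px
  columnWidth: number; // width of one metric column, in px
}

// A slow drag follows the finger exactly; a quick swipe snaps the next
// column to the leftmost position within the non-fixed panel.
function panAfterGesture(
  panel: ColumnPanel,
  dragDistance: number, // px moved by the finger (positive = leftward)
  velocity: number      // px/s at finger release
): number {
  const SWIPE_VELOCITY = 500; // above this, treat the gesture as a swipe
  if (Math.abs(velocity) >= SWIPE_VELOCITY) {
    // Quick swipe: snap to the adjacent full column boundary.
    const direction = velocity > 0 ? 1 : -1;
    const next = Math.round(panel.offset / panel.columnWidth) + direction;
    return Math.max(0, next * panel.columnWidth);
  }
  // Slow drag: move gradually, clamped so the first column stays reachable.
  return Math.max(0, panel.offset + dragDistance);
}
```

Snapping on swipe is what keeps a column cleanly aligned at the left edge rather than leaving it half-visible.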

Design Challenge 2

Support a tree-hierarchy with easy and intuitive opening and closing of the outline structure.

The web version of MicroCharts uses indentation for the outline mode. For the mobile version, we did not want to use indentation, due to space constraints as well as general UX. We went instead with a visual separation, showing the hierarchy with subtle drop shadows.

We also supported gestures for quickly opening and closing an entire level – pinch open and close.
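Behind the pinch gestures sits a simple tree model: the gesture toggles every node at one depth at once, rather than one row at a time. A minimal sketch in TypeScript (node and function names are illustrative, not the widget’s actual API):

```typescript
// One node in the outline hierarchy shown by the widget.
interface OutlineNode {
  label: string;
  depth: number;          // 0 = top level of the outline
  expanded: boolean;
  children: OutlineNode[];
}

// Pinch-open expands every node at the given depth; pinch-close collapses it.
function setLevelExpanded(node: OutlineNode, depth: number, expanded: boolean): void {
  if (node.depth === depth) node.expanded = expanded;
  for (const child of node.children) {
    setLevelExpanded(child, depth, expanded); // recurse through the whole tree
  }
}
```

Operating on a whole level matches the gesture’s intent: one pinch opens or closes an entire tier of the hierarchy.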


Kinect the Dots

Kinect the Dots is an interactive storytelling application designed for a classroom experience for children with autism. The teacher narrates the story and children can paint and interact with it through gestures.
What I did:   Ethnographic research     Design & storyboarding    Development      User testing



While conducting research for another project, we had a chance to observe the interactions between teachers and children at the Lionheart School for children with autism. We sat in on their therapy sessions, saw their use of technology such as smartboards, and observed what did and didn’t engage the children. As we did some online research, we came across encouraging reports of the use of the Kinect as a part of therapy. We wanted to take this a step further and create a customized application for children with autism.

Our vision was simple: this would just be another tool in the toolbox that teachers and therapists have – one that leveraged technology to engage rather than alienate children from their teachers and peers.


Being a fly on the wall

We were super lucky that the teachers and therapists at the Lionheart school were so supportive. We went to the school a couple of times every week, and sat in on their classes, activities and therapy sessions. We also conducted a lot of interviews with the teachers and some students to understand what made them tick. The initial response was positive – most students took to technology enthusiastically.

The ethnographic research led us to our main design concept: we would recreate a storybook read-aloud experience, using the Kinect to engage the students and immerse them within the story.

The teachers already used a lot of props, pictures, singing and acting to help the children visualize the story, and we wanted to capitalize on this. Stories were a big part of their daily routine at the school – the teachers used story-time to encourage the students to answer questions, develop empathy, listening and language skills, improve attention span and develop their curiosity and creativity.



The final application let the teachers narrate the story of Jack and the Beanstalk to a child, who stood in front of a large screen and a Kinect. The screen showed each page of the storybook, and the teacher could flip to the next page using a remote.

  1. Some pages were in the form of a black outline of the pictures, and the screen had four colors that the child could “pick up” to color the pictures.
  2. Some pages had sound effects that the teachers could trigger, which they typically used in order to encourage the child to answer questions.
  3. Some pages had missing objects on the page, and the screen showed a few options. The child could gesturally drag and drop an object onto the page in response to the teachers’ questions.
  4. On a couple of pages, the child could actually control the character on screen through their movements. For example, when Jack was running away from the Giant, the teacher would encourage the child to make a climb down motion, and the faster the child “climbed down”, the faster the little Jack moved down the tree in the picture.
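The climb-down interaction amounts to mapping hand velocity between Kinect frames to the character’s on-screen speed. The real implementation was in C# with the Kinect SDK; the sketch below is a hedged TypeScript illustration of the mapping, with all names and the gain value as my own assumptions:

```typescript
// Map the downward speed of the child's tracked hand to Jack's descent speed.
// Only downward motion counts, so Jack never climbs back up by accident.
function jackDescentSpeed(
  handYPrev: number,  // tracked hand height in the previous frame (meters)
  handYCurr: number,  // tracked hand height in the current frame (meters)
  frameDt: number,    // time between frames (seconds), ~1/30 for the Kinect
  gain: number = 2.0  // tuning factor from hand speed to on-screen speed
): number {
  // Positive when the hand is moving down.
  const handVelocity = (handYPrev - handYCurr) / frameDt;
  return Math.max(0, handVelocity * gain);
}
```

Clamping at zero for upward motion is one of the customizations that made the gesture forgiving for children, who rarely move smoothly in one direction.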



We coded everything in C# using the Kinect SDK, which was surprisingly easy to do. The more difficult parts were figuring out which gestures were intuitive for each of the actions and customizing the gesture recognition to work well with children. We did a lot of iterative testing and design with the teachers and students at the Lionheart school and got a lot of excellent feedback.

Simple changes such as changing the color of the little “palm” icon to the currently picked color made it a lot easier for the children to color in the pictures. We also created a rectangle on the carpet with some tape within which we wanted the children to stay so that the Kinect would identify them correctly.

The highlights of the experience

The students all really enjoyed the experience, especially the climbing up and down part. In fact, when we left the system on and running, we observed some of the older children narrating the story to some younger children – a great win!

The iterative design and testing we conducted showed that interactive approaches to storytelling hold a lot of promise. Some further approaches we would like to study include:

1. Adding support for multiple students to interact with the story at the same time – maybe co-operative activities that influence the story.

2. Creating an application for remote storytelling incorporating video and audio chat.

We won first prize in the “Health” category at Georgia Tech’s Convergence Innovation Competition. We were also covered by 11-alive, a local TV news channel.