Prototyping Studio, MHCI+D, University of Washington
Crime data is notoriously problematic to work with. When working with any data set, one of the first steps should be thinking critically about its biography. As Catherine d’Ignazio explains, data appears to be straightforward information but is always separated from the context of its collection. With crime data, then, we face issues of over-policing, regional priorities and dispatch schedules, underreporting, and inconsistent practices and laws across municipalities.
Nonetheless, there is increasing interest in crime data, with more cities publishing open data sets and more applications integrating them (often without thoughtful analysis). With Route Aware, we attempted to design a mobile application that matched users’ mental models of their personal safety with crime data. Prototyping the application did help us engage with mental models around crime data. However, designing interactions from deeply biased data sets, without a careful analysis of how the data was collected and who might be harmed by it, is ultimately too risky to recommend.
We designed an application concept that would integrate crime data into Google Maps. The application would allow users to set their own crime routing preferences and local (Seattle) crime data could be used to plan a route.
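As an illustrative sketch (not the code behind the concept), the routing preference idea can be expressed as a weighted score over crime incidents near each candidate route; the category names, weights, and distance threshold below are all hypothetical:

```python
from math import hypot

def route_risk(route, incidents, weights, radius=0.001):
    """Score a route by summing user-weighted incidents near its waypoints.

    route: list of (lat, lon) waypoints
    incidents: list of dicts like {"lat": ..., "lon": ..., "category": ...}
    weights: user routing preferences, e.g. {"violent": 1.0, "nonviolent": 0.2}
    radius: proximity threshold in degrees (a rough small-area approximation)
    """
    score = 0.0
    for inc in incidents:
        w = weights.get(inc["category"], 0.0)
        if w == 0.0:
            continue
        # Count the incident once if any waypoint falls within the radius.
        if any(hypot(lat - inc["lat"], lon - inc["lon"]) <= radius
               for lat, lon in route):
            score += w
    return score

def safest_route(candidates, incidents, weights):
    """Pick the candidate route with the lowest weighted risk score."""
    return min(candidates, key=lambda r: route_risk(r, incidents, weights))
```

A real implementation would pull candidate routes from a directions API and incidents from the city’s open data feed; this sketch only shows how per-category preferences could shape the choice.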
The application could also be switched to “Aware Mode,” providing discreet vibration feedback when the user enters a high-crime area, notifying them without requiring them to look at a device screen.
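A minimal sketch of the Aware Mode trigger, assuming incident points with latitude/longitude and a hypothetical alert radius and count threshold:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    R = 6371000  # Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

def should_vibrate(user_pos, incidents, radius_m=200, threshold=5):
    """Fire the Aware Mode haptic when the number of selected incidents
    within radius_m of the user meets the threshold (both values are
    illustrative, not tuned)."""
    lat, lon = user_pos
    nearby = sum(1 for inc in incidents
                 if haversine_m(lat, lon, inc["lat"], inc["lon"]) <= radius_m)
    return nearby >= threshold
```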
We approached this project through iterative design. Validating ideas quickly seemed especially relevant when balancing a potential user desire with the ethical considerations of neighborhood stigmatization.
In our user evaluations, we focused on Route Aware’s usability and desirability by testing:
1. Paper prototypes
2. An interactive prototype
3. A haptic/vibration prototype
We understood that our proposed application raises issues of neighborhood stigmatization. Areas with high crime rates may also be impoverished areas composed largely of members of historically marginalized communities. We wanted to meet city-dwellers’ desire for extra knowledge around safety while mitigating neighborhood stigmatization to the greatest extent possible. Through the design process, it became evident that this balance would be challenging to strike.
We felt the onboarding process was key and hypothesized that we could guide users into setting preferences that both kept them safe and lessened bias. We created two onboarding processes using paper prototypes to test our ideas.
Difficulty Categorizing Crimes
The first prototype (far left) categorized crimes into two categories (“violent crimes” and “nonviolent crimes”). Our usability testers both stated that they wanted to be able to select the crimes they would like to be notified of from a list, rather than having to choose an overall category.
The second prototype (second from left) categorized crimes into three alert categories (“high,” “medium,” and “low”). Our usability testing unearthed many issues inherent in this approach, most of which related directly to the language used within our prototype, as well as our decision to place crimes on a spectrum of severity.
We decided to forgo “abstracting” crimes by presenting users with only high-level crime categories.
In the interactive prototype, crimes still fell under two categories, “violent crime” and “nonviolent crime,” but were now explicitly listed. In an effort to mitigate neighborhood stigmatization, we referred to the Seattle government’s crime data categories and included only the crimes we felt were relevant to walkers or bikers; crimes involving property vandalism, for example, did not seem relevant and were omitted.
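That inclusion decision can be sketched as a simple allow-list applied before any mapping; the category labels here are hypothetical stand-ins for Seattle’s actual offense categories:

```python
# Hypothetical labels standing in for Seattle open-data offense categories
# we judged relevant to walkers and bikers.
PEDESTRIAN_RELEVANT = {"ASSAULT", "ROBBERY", "THEFT-PERSON"}

def filter_incidents(incidents, selected):
    """Keep only incidents that are both pedestrian-relevant and in the
    user's opted-in set, dropping e.g. property-vandalism records up front."""
    return [inc for inc in incidents
            if inc["category"] in PEDESTRIAN_RELEVANT
            and inc["category"] in selected]
```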
Our next round of testing (above, right) revealed that our text was too small and too long on several screens, but there did not seem to be any confusion over selecting crime types individually. We hypothesize, however, that users may simply “select all” crimes, thereby alerting themselves to many crimes that have no bearing on their route safety. This suggests a “better safe than sorry” approach that may cause some users to avoid neighborhoods altogether.
Unable to remove bias
Ultimately, though these two prototypes let us experiment with users’ mental models around crime, we believe crime mapping would too often reinforce and extend users’ fear of already stigmatized neighborhoods, promoting discrimination.
We designed a vibration prototype using the iPhone’s custom vibration options. After reviewing research on haptic feedback, we created two unique vibration patterns at two speeds. We broke the patterns into two groups (A and B) and asked 15 participants to tell us which patterns conveyed a sense of urgency.
Vibrations 2 and 3 were deemed the most recognizable and urgent, while 3 reminded several people of a song.
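One way to model the patterns we tested is as alternating vibrate/pause durations in milliseconds, which is how custom vibrations are effectively authored on the phone; the specific durations below are illustrative, not our exact patterns:

```python
# Each pattern alternates vibrate/pause durations in milliseconds.
# These durations are invented for illustration only.
PATTERNS = {
    1: [100, 100, 100, 100],     # short, even pulses
    2: [300, 100, 300, 100],     # longer pulses tend to read as urgent
    3: [100, 50, 100, 50, 400],  # syncopated rhythm (can feel song-like)
    4: [500, 500],               # slow heartbeat
}

def at_speed(pattern, factor):
    """Scale a pattern's durations: factor < 1 plays it faster."""
    return [round(d * factor) for d in pattern]

def total_ms(pattern):
    """Total play time of one repetition of a pattern."""
    return sum(pattern)
```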
We designed Route Aware to integrate with Google Maps, so we shaped our icon and screens to Google’s Material Design standards.
A prototype of the onboarding experience, from activation in the menu, to the introduction of the Aware Mode feature button. This animation was created using Principle.
The icon had to convey both “protection” and “notification” and visually fit within the Google Maps interface as a button. I modified a security icon used within the design standards and added equally scaled curves to visualize a vibration.
Overall, data sets around crime are highly problematic and mapping them can embed bias against historically marginalized groups into the experience of neighborhoods.
Prototyping this application did help us engage with mental models around crime data. However, our participants’ mental models seemed to over-index on safety: they selected any and all categories in the data set.
© 2019 Julie Sutherland UX IxD