The Airbnb scandal during the summer of 2011 offered a grim look at what constitutes a “bad user” and why such users can be so worrying for small businesses.
In this case, one bad user compromised the value of an entire community. No stream of user metrics could assuage the concern and loss of trust Airbnb suffered. Potential users were left to pause over a central question about the business: how many of its users are this bad, and how can I tell the difference?
The technology world needs to take this issue more seriously. To date, even the most exceptionally social of software – Facebook and the social media crowd – refuses to attach any sort of qualifier to the term ‘user’. There are no working ideas of ‘good’ users and ‘bad’ users. At best, there are “power” users who engage with technology so far above the average that they become statistically visible.
Without qualifiers or nuances, the term user becomes vacuous. All we know about the user of a technology is that they use it. This is hardly a helpful designation.
The word user has at least one negative meaning in the English language. It has to do with drugs. Yet the tech world has never minded this dependence metaphor. In fact, its addictive and enveloping products, ported out to our phones and televisions, have made dependence inevitable.
We are all users now. An entire swathe of society, across class and cultural boundaries, has been homogenised as tech product dependents, redefined by our relationship to servers, screens, and other entities. Our digital lives are parsed by drop-down menus and text fields. We must now find ways within these coded protocols to ‘write ourselves into being,’ as sociologist Danah Boyd suggests, echoing Ben Jonson’s dictum on speaking.
Media output from users is called content. Our behaviours are deemed interactions, suggesting a fixed set of potential actions in which every user behaviour has already been imagined by software design. Users inhabit communities made technical and explicit: they are deemed networks and mapped using “social graphs”. Without descending too much into Matrix-level paranoia, are we being shaped into software-ready agents who recognise the parameters of applications and play agreeably within a set of desired outcomes?
Well, no. Users are not really predictable agents at all. Though we have been placed in formal, coded roles, our interactions with software are often horribly chaotic. We find bugs, loopholes, problems, glitches and redundancies that can be exploited – or cherished. Users redefine the technologies they use because we have a dangerously open relationship with the code around us.
Failing to think critically about users leaves companies to assume that any user is as good as the next. It gives rise to a cult of the metric, in which the number of users a technology has is seen as directly proportional to its “awesomeness”.
This is something that tech bloggers and mainstream journalists love to draw attention to. “Instagram now has 15 million users” is the accepted way of saying anything from “Invest now” to “Mobile photo-sharing is really cool at the moment”. But it tells you little about those users. And we never learn much about the product, either.
Instagram offers a particularly rich case study in differentiating users. This is because some Instagram users mobilise others, and help create a global network of localised Instagram meetups that connect the iPhone app users in the real world. These users have acted as evangelists and support teams for new users and potential converts.
They have helped a team of just four developers in San Francisco become one of the most creative and celebrated social media platforms in 2011. They have hosted exhibitions of Instagram content, run workshops on iPhoneography, and propelled brands and advertisers to flock to Instagram with chequebooks and campaign requests.
But the individuals who built these remarkable communities are a select subset of the wider network. When TechCrunch or an angel investor reflects on the total number of registrations, they bury the outsized value of exceptional users among casual or one-time registrants. Even worse, they hide the concerned and passionate user among the spammy and profane ones who see networks, blogs, and tech products as places to attack and undermine.
Should there be a rule of thumb relating the anxious, bullying ‘bad’ user to the engaged, conscientious ‘good’ user? Perhaps the ratio is something like 20:1.
Airbnb did not know the difference between the two, and it still does not. That is because the tech world has been preoccupied by code first and people second. The community manager hiring process always follows engineering, rather than accompanying development and product launches. But you cannot engineer good users. You can only try to build platforms and technologies that encourage people to behave well and to be positive.
On a photo-sharing application with no comments and only a ‘like’ mechanic, attacking another user or disparaging their work proves difficult. It allows positive interactions or nothing at all. While users may feel that interactions are too limited, they also sense a culture of positivity that has been consciously planned for. This design choice came from observing and differentiating users. It came from deciding to seek out good users and to silence or redirect bad ones.
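The design logic above can be sketched in code: a data model whose only interaction verb is ‘like’, so a negative interaction is impossible by construction. This is a purely hypothetical illustration, not any particular app’s implementation; all names here are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    """A shared photo whose sole possible interaction is a 'like'."""
    owner: str
    likes: set = field(default_factory=set)

    def like(self, user: str) -> None:
        # The only verb the design exposes: a positive interaction,
        # or nothing at all. There is deliberately no comment() method,
        # so disparaging another user's work cannot be expressed.
        self.likes.add(user)

photo = Photo(owner="alice")
photo.like("bob")
photo.like("carol")
print(len(photo.likes))  # → 2
```

The point is that moderation happens at the level of the interface vocabulary itself, rather than by policing content after the fact.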
Defining “good” and “bad” users is part of seeing the people who use technology more realistically. It provides the first steps to creating tech cultures where the types of users engaged in a product are just as important as their number.
Separating good and bad users is only a rough beginning. We must recognise that there are desirable, engaged and value-adding tech users, alongside worrying, bullying misanthropes with unlimited data plans, and all the shades in between – and take account of this in our products and designs.