Social Justice & Technology

Every day technology makes life easier for millions of people, and in doing so makes life harder for others.

Adam Gopnik, in his New Yorker article, “The Information: How the Internet Gets Inside Us,” breaks down the population into three groups:

call them the Never-Betters, the Better-Nevers, and the Ever-Wasers. The Never-Betters believe that we’re on the brink of a new utopia, where information will be free and democratic, news will be made from the bottom up, love will reign, and cookies will bake themselves. The Better-Nevers think that we would have been better off if the whole thing had never happened, that the world that is coming to an end is superior to the one that is taking its place, and that, at a minimum, books and magazines create private space for minds in ways that twenty-second bursts of information don’t. The Ever-Wasers insist that at any moment in modernity something like this is going on, and that a new way of organizing data and connecting users is always thrilling to some and chilling to others—that something like this is going on is exactly what makes it a modern moment.

Anyone who has read this blog or heard me speak would have me pegged as a Never-Better, and that is pretty close to the truth. I do think that we live in an era that rivals that of the printing press, with its subsequent explosion of literacy and education. In my lifetime I have already seen a startling collapse of time and space due to how the internet and other technologies have allowed us to traverse great geographical distances in seconds. From my home I can bank, buy, and sell. I provide therapy and consultation to people as close as my city and as far as Singapore with little to no noticeable difference. And when I want to relax I join colleagues and friends in a virtual world that has denizens from Australia, the UK, and Asia.

And yet, as much a Never-Better as I am, I have noticed how social justice continues to lag behind. Not in the technology itself, but in both access to it and the fit between human beings and the systems they are in. Technology, as always, has advanced beyond our ability to master it, think critically about it, and, perhaps most importantly, achieve equity with it.

Let me give you an example I experienced fairly recently of how technology that benefits me has put others in my own social sphere at a disadvantage. I have an iPhone App, courtesy of a nameless coffee vendor, that allows me to pay for my daily coffee with the flash of a barcode. My local barista rings me up, scans my iPhone, and the transaction is finished. At first, as an early adopter, I was one of the few folks using it in the Cambridge area, but more and more people are taking advantage of the App, and it is now commonplace in Austin, TX and Silicon Valley.

The problem with this App is that it financially disadvantages the baristas. There is no functionality as of yet in the App to allow for adding a gratuity, and since technology has worked all too well in eliminating the need for paper currency, I rarely carry any cash with me to add one. When I first became aware of this, the temptation was to slink away from the register as quickly as possible, and if I didn’t have ongoing relationships with the baristas I might easily have done so. Instead I asked them whether they had noticed a drop in gratuities since the App became prevalent, and they remarked that they had. So what has been a convenience for me has significantly reduced the regular income of others.

This may seem a privileged example, and a minor one, but that is in fact one reason I am mentioning it. Every day, through these minute transactions, we influence the lives of others, often without thought. The trope of the machine replacing the worker is in fact an industrial-age one: today, many workers do basically the same work they did a decade ago, but technology has made it easier to overlook and underpay them. And for that to change, we need to notice the behavior, and then, I suggest, address the technology.

There is a shortfall between lived experience, social justice, and technology occurring on a microscopic level in the US, and part of why we all need to become more digitally literate is so that we can advocate on behalf of underserved and marginalized populations for technology that improves their lives. Avoiding technology is not the answer. Slinking away from the register is not the answer. The answer, in part, is to contact the company in question and suggest adding features to the technology to bridge the gap. In this case, I’m contacting the nameless coffee company to suggest they add a feature, in either the App-user interface or the register-barista interface, to allow for the inclusion of a gratuity. It seems like a simple fix, but as someone who owns and works in a company that creates customizable features, I can tell you that such features are expensive, and therefore often not built until somebody requests them.
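To make the suggestion concrete, here is a minimal sketch of the kind of change I have in mind, assuming the App simply sends the register a small payment record. Everything here is hypothetical: the function name, the cents-based fields, and the flow are invented for illustration, and I have no knowledge of the vendor’s actual code.

```python
# A hypothetical sketch only: these field names and this flow are
# invented for illustration; the coffee vendor's real API is not public.

def build_payment(amount_cents: int, tip_cents: int = 0) -> dict:
    """Bundle the purchase amount with an optional gratuity so the
    register can record the tip and route it to the barista."""
    if tip_cents < 0:
        raise ValueError("tip cannot be negative")
    return {
        "amount_cents": amount_cents,
        "tip_cents": tip_cents,                   # 0 when the customer skips the tip
        "total_cents": amount_cents + tip_cents,  # what the barcode scan actually charges
    }

# A $4.00 coffee with a $1.00 tip:
print(build_payment(400, tip_cents=100))
# {'amount_cents': 400, 'tip_cents': 100, 'total_cents': 500}
```

The point of the sketch is that the data change is trivial; the expensive parts are the screens that ask the customer for a tip and the accounting that routes it to staff, which is exactly why such features tend to wait until somebody asks for them.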

In terms of world equity and technology we have an even greater challenge, namely access. More than 81% of people in the US have some form of broadband internet access, as compared to approximately 5% of the people on the African continent. One out of three people in the US has internet speeds of 10 Mbps, as opposed to essentially no one in Ghana, Venezuela, and Mongolia.

Recently I had the opportunity to participate in a game developed by Jane McGonigal at SXSW, which she claims will have boosted my resiliency, and hence extended my life by 7-8 minutes, after playing it just once. I believe her. Which makes me think it is all the more important that we find ways not only to create games where people in the developed world learn about developing countries, but also to help people in developing countries access and develop their own video games. With all of the great work being done in the US and Europe on socially serious games and games for health, we are seeing how video games can increase resilience and learning skills. How can we use these technologies to bring about similar changes in less affluent countries and populations? Because if playing a video game could help us crack the enzymatic code of HIV, which 1.2 million people in the US live with, what about playing a video game to increase resilience in sub-Saharan Africa, where 22.9 million people live with it?

I think it is also imperative that people in developing countries have access not only to playing video games, but to creating them. If they don’t, the same cultural colonization that has happened in the past will repeat itself. We need to support social justice in such geeky and subtle ways as making sure that indigenous cultures all over the planet have the opportunity to design games that reflect their own cultures, not a globalized McVersion of them.

Between the whittling away of a worker’s salary in the US and sub-Saharan HIV lie myriad other social justice concerns, but digital literacy and emerging technologies are the threads that bind them all together. The same internet that allowed LGBT people to find each other in a hostile 20th century can be used to out them against their will today. The same social media that allows a more participatory experience can give people new avenues and amplification when they want to harass others. The problem is not technology, but our lack of digital literacy. And by “our” I mean the individual: you and me. Because corporations and governments are making it their business to learn how to master technology and its power even while we debate whether it was Better-Never or Never-Better.

I’ve often said on this blog that if you want to run a private psychotherapy practice in the 21st century you cannot ignore technology. Now I’m upping the ante, and saying that if you want to be a socially just human being you cannot ignore it. We need to learn how emerging technologies work and how they don’t. We need to identify the slippages between human systems and the technologies that convenience some at the expense of others. We need to see the internet as an infrastructure necessary to make the developing world as viable as the developed. And we need to understand how digital literacy can empower us before someone takes that power away.


I Come To Praise First-Person Shooters, Not To Bury Them


I should begin by saying that I don’t personally enjoy the type of video game known as a first-person shooter (FPS) very much.  They make me jittery when I play, and I am easily overwhelmed by them.  I’m still stuck in the tutorial room with Jacob in Mass Effect 2.  If there are settings to disable gore and swearing on a game I’ll click ’em.  But as I looked back on my past posts I realized that I have neglected to weigh in on FPS, and in doing so am guilty of the same kind of dismissal I critique in colleagues.  (Note to gamers: I know there are several important distinctions between FPS and TPS or third-person shooters, but that’s for another post.)

There’s a lot to like about FPS games, and here are a few examples.

  1. Many FPS such as Halo 2 can be collaborative as well as violent.  Players join platoons and need to learn how to coordinate, communicate and problem-solve in a fast-paced environment.  Games like Halo also provide environments for players to learn how to assume leadership roles, follow directions from other players, and think critically about stressful in-world situations.
  2. FPS encourage impulse control as well as aggression.  Crucial to success in FPS games is the ability to time attacks and maneuvers, which requires controlling the impulse to pull that trigger.  Although we tend to focus on the aggression in FPS, there’s often a lot of sneaking going on as well.  In BioShock there are actual decision points where refraining from killing characters changes the entire outcome of the game.  Even though the player is not learning teamwork in single-player games, they are often learning the same sorts of decision-making and impulse control found in good old-fashioned “Red Light, Green Light.”
  3. First-person shooters improve hand-eye coordination.  One important component of hand-eye coordination is visuospatial attention.  Research by Green et al. suggests that video games improve visuospatial attention, and further that FPS video games do it even better than games like Tetris.  Hand-eye coordination is a skill most of us would agree is a good thing to have.  It helps improve your readiness to learn, increases your ability to excel at sports, increases your confidence and makes juggling less stressful.
  4. First-person shooters may increase a sense of mastery and alertness.  So many parents and educators lament that children aren’t able to pay attention.  And yet, what makes FPS games so compelling is their immersive quality.  As Grimshaw et al. discuss, the literature describes immersion in varying ways, such as “the state of consciousness when an immersant’s awareness of physical self is diminished or lost by being surrounded in an engrossing total environment, often artificial.”  Further, in order to be completely immersed in an environment, players “must have a non-trivial impact on the environment.”  Wandering around the game world may not be sufficient to immerse players in a flow-like state, and shooting people, whatever else you may say about it, does not lend itself to feeling trivial in an environment.  Imagine if educators could harness such immersive qualities in the classroom.  That would be much more effective than saying loudly, “Pay attention!” which usually has the exact opposite of its intended effect.

Given the above compelling reasons to think well of FPS, why are they so often singled out as the bad seed of video games?  The answer, I would suggest, is a sociopolitical one that gamers as a whole ignore at their peril.

Science is often, maybe always, political, and it has an uneasy relationship with civil rights movements.  The example that springs to my mind is the LGBTQ civil rights movement.  Back when the preponderance of science pathologized all LGBTQ people, there was greater solidarity amongst the various thinkers, activists, and citizens of those subcultures.  From Stonewall up through the early AIDS crisis, there was less fragmentation and more coordination, with the understanding that civil rights benefited everyone.

But within the past two decades, many members of the LGBTQ community have begun to receive recognition and acceptance in society as a whole.  At this writing 7 states have legalized gay marriage (Welcome Washington!) and more accept domestic partnerships between same-sex couples.  Bullying based on sexual orientation and hate crimes have received more coverage from media with sympathetic stances towards LGBTQ youth.  And I can’t remember the last time I heard talk about the latest study locating the “gay gene.”

And yet, science and politics have turned their gaze towards specific subsets of the LGBTQ population.  Transgender rights (a notable recent gain in my home state) are still ignored or reduced to bathroom conversations and debates about the poor parenting of those who don’t make their children conform to cisgender norms.  The status of LGBTQ youth of color as a priority population is met with grumbling.  Bisexuals are still considered in transition or confused, asexuals frigid or repressed.  Polyamory is confused with lack of commitment or neurotic ambivalence, and BDSM isn’t even recognized as worthy of any sort of advocacy.

And to a large extent, whenever one of these specific subcultures is targeted, the other factions of the LGBTQ community remain silent.  In doing so, they become allied with the perpetrator.  As Judith Herman points out in her seminal work Trauma and Recovery, “It is very tempting to take the side of the perpetrator. All the perpetrator asks is that the bystander do nothing.”  This is exactly what members of the LGBTQ community are doing when they cease to maintain the solidarity and mutual support that helped get homosexuality removed from the DSM.

And so the focus shifts from the general “gay people are bad/sick” to the more specific populations under the LGBTQ umbrella, and rather than fighting for them we allow them to be omitted from civil rights protections.  A case in point was made by openly trans HRC member Donna Rose, who resigned in protest when HRC supported a version of the Employment Non-Discrimination Act that included sexual orientation but not protections for transgender people.  A group may only be as strong as its weakest member, but solidarity often ends when the strongest members of an alliance get what they want.

The gaming community would do well to take a lesson here.  Recently video games have been getting increasing recognition as an art form, an educational tool, and a possible solution to world problems ranging from poverty to AIDS.  As society moves to a more progressive stance on technology and video games, studies are coming under scrutiny for their sweeping, pathologizing generalizations about a complex and diverse group.

(The most pernicious example of this, in my opinion, is the concept of the “screen” and “screen time.”  Studies ask how much time subjects spend in front of electronic devices, as if all activities were identical in experience and effect.  Watching television, playing video games, and surfing Facebook are all treated as similar neurological phenomena, when they aren’t.  It’s much more complicated than that, and different physiological systems are affected in different ways.  Even the idea that all screen time dysregulates sleep the same way has recently been questioned, with televisions showing less suppression of melatonin than iPads.  So which screen you’re doing things on makes a difference.  And then there’s what you are doing.

Watching television is a more passive and anergic activity than playing video games, in my experience.  No, I’m not going to cite a particular study here, because I want us to focus on thinking critically about the designs of studies, not the data.  And as Paul Howard-Jones points out in his video, learning itself activates different parts of the brain at different phases of an individual’s learning cycle for a particular activity.  So yes, video game users have different-looking brains than those who aren’t using them; that doesn’t mean games are bad, but that players are using different parts of their brains and learning different things.  Most people in the gaming community would have some solidarity here with other gamers, and balk at the idea that a screen is a screen is a screen.  And “screen time” usually implies watching television, playing games, or surfing the net, not compiling doctoral dissertation lit reviews, planning a vacation, doing your homework, or looking up a recipe.)

So gamers are solidly behind fighting these blanket generalizations.  That’s great.  But I find that gamer solidarity starts to fall apart around the more specific attacks being levied in science and politics against FPS and violent games.  Studies say these games desensitize children to violence, increase aggression, and correlate with hostile personalities.  There are also studies that conflict with these findings, but I want to ask a different, albeit more provocative, question:

What’s wrong with being aggressive?

I think child’s play has a long history of being aggressive:  Cops and Robbers, water pistols, football, wrestling, boxing, and tag all encourage some element of aggression.  Most of us have played several of these in our lifetime with some regularity; have we become desensitized and aggressive as a result?  Am I sounding too hostile?  🙂

And we are sending children and adolescents a mixed message if we label aggression as all-out bad.  Not everyone or every job requires the same amount of aggression.  Wanting to be #1 and competing, whether as a boxer or a president, requires some aggression.  Aggression is in fact a leadership quality.  It allows us to take risks, weigh the potential hazards, and go for something.  Feelings of aggression heighten our acumen, can speed up our assessment of a situation, and help us stand up to bullies.  Whether we agree with this war or that, would we really want our soldiers to be in-country with no aggression to help them serve and defend?  Fortune, as the saying goes, favors the bold, not the timid.

FPS games have a place on the Gamestop shelf and a place in the gaming community.  They allow us to engage in virtual activities that have real-life benefits.  They are a focal point for millions of gamers, and I believe unlocking their DNA will go a long way toward discovering how to improve work and learning environments.  Stop letting critics shoot them down, or don’t be surprised if you’re in the crossfire next.


A Follow Up to Dings & Grats

My last post, “Dings & Grats,” generated quite a lot of commentary from therapists and gamers alike.  I was surprised by many of the comments, which tended to fall into one of several groups.  I’ll summarize and paraphrase them below, each followed by my response.

1. “I haven’t seen any research that shows video games can increase self-confidence, but I have seen research that shows they cause violent behavior.”

Fair enough; not everyone keeps up to date on research in this area, and the media certainly hype the research that indicates “dire” consequences.  So let me direct you to a study here which shows that using video games can increase your self-confidence.  And here is a study which debunks the myth that video games cause violence.

2. “I find gamers to be generally lacking in confidence, introverted, reactive and aggressive, lacking in social skills, etc.”

These responses amazed me.  Gamers are part of a culture, and I doubt that many of my colleagues would make such overarching generalizations about other groups, at least in public.  Would you post “I find women to be generally lacking in confidence,” or “I find obese people introverted,” or “I find African American people lacking in social skills”?  And yet the open way many mental health professionals denigrated gamers, without any sense of observing ego, was stunning.  I was actually grateful that most of these comments were made in therapist discussion groups, so that gamers didn’t have to read them.  This is cultural insensitivity, and I hope that if my colleagues aren’t interested in becoming culturally competent around gaming they will refer those patients out.

3. “Real relationships with real people are more valuable than online relationships.”

This judgment confused me.  Who do we think is behind the screen playing video games online, Munchkins?  Those are real people, and they are having real relationships, which are just as varied as relationships that aren’t mediated by technology.  Sure, some relationships online are superficial and others are intense; just as in your life as a whole, some of your relationships are superficial, others are intense, and many fall somewhere in between.  I’ve heard from gamers who met playing online and ended up married.  And if you don’t think relationships online are real, stop responding to your boss’s emails because you don’t consider them real, and see what happens.

4. “Video Games prevent people from enjoying nature.”

I am not sure where the all-or-nothing thinking here comes from, but I was certainly not saying that people should play video games 24 hours a day instead of running, hiking, going to a petting zoo, or kayaking.  I know I certainly get outside on a daily basis.  But even supposing that people never came up for air when playing video games, I don’t think that would be worse than doing anything else for 24 hours a day.  I enjoy running, but if I did it 24/7 that would be as damaging as video games.  What I think these arguments were really saying is, “we know the best way to spend time, and it is not playing video games.”  I really don’t think it is our business as therapists to determine a hierarchy of leisure activities for our patients, and if they don’t want to go outside as much as we think they ought to, that’s our trip.

5. “I’m a gamer, and I can tell you I have seen horrible behavior online.”

Me too, and I have seen horrible behavior offline as well.  Yes, some people feel emboldened by anonymity, but we also tend to generalize from a few rotten apples rather than the 12 million+ people who play WoW, for example.  Many are friendly or neutral in their behavior.  And there is actually research showing that although a large number of teens (63%) encounter aggressive behavior in online games, 73% of those report having witnessed others step in to intervene and put a stop to it.  In an era where teachers turn a blind eye in “real” life to students who are bullied or harassed, I think video games are doing a better, not worse, job on the whole of addressing verbally abusive behavior.  Personally, I hate when people use the phrase “got raped by a dungeon boss,” and I hope that people stop using it.  But I have heard language like that at football games, and even unprofessional comments at business meetings.  I don’t think we should hold gamers to a higher standard than anyone else.  Look, we’ve all seen jerks in WoW or Second Life, but we’ve seen jerks in First Life as well.  Bad behavior is everywhere.

6. “Based on my extensive observations of my 2 children and their 3 best friends, it seems clear to me that…”

Ok, this one does drive me nuts.  If you are basing your assertions on your own children, not only do you have a statistically insignificant N of 2 or so, but you are a biased observer.  I know it is human nature to generalize based on what we know, but to cite it as actually valid data is ludicrous.

7. “I think face to face contact is the gold standard of human contact.”

Ok, that’s your opinion, and I’m not going to argue with it.  But research shows that it is not either/or: the majority of teens play games with people they also see in their offline lives.  And let’s not confuse opinion with fact.  You can think that video game playing encourages people to be asocial, but that is not what the research I’ve seen shows.  In fact, I doubt it ever could show that, because as we know from Research 101, “correlation is not causation.”

By now, if you’re still with me, I have probably hit a nerve or two.  And I’ve probably blown any chance that you’ll get my book, which is much more elaborate and articulate than this post.  But I felt compelled to sound off a little, because it seemed that a lot of generalizations, unkind ones, were coming out and masquerading as clinical facts.  Twenty-first-century gaming is a form of social media, and gamers are social.  What’s more, they are people, with unique and holistic presences in the world.  I wasn’t around to speak up in the 50s, 60s, and 70s when therapists were saying that research showed all gays had distant fathers and smothering mothers.  I wasn’t around when mothers were called schizophrenogenic and cited as the cause of schizophrenia.  And I wasn’t around when the Moynihan Report came out to provide “evidence” that the Black family was pathological.  But I am around to push back when digital natives in general and gamers in particular are derided in the guise of clinical language.

To those who would argue that technology today is causing the social fabric to unravel, I would cite my elder, Andy Rooney, who once said, “It’s just amazing how long this country has been going to hell without ever having got there.”


How To Learn About Video Games & Why You Ought To

http://www.youtube.com/watch?&v=JY8h-U7rE6Y

What Does Gamer-Affirmative Therapy Mean?