Tag Archives: Science in Society

Science is grey, not black and white.

tl;dr Blaming scientists for the knowledge they produce is a pointless task fraught with insoluble problems.  We would do better to police the use of knowledge rather than its production.

Over at The Scientist, Heather Douglas has put forth her opinion on the moral responsibility of scientists in a piece entitled “The Dark Side of Science”.  Her argument is fairly straightforward:  science can have bad consequences, and if we as scientists can foresee those consequences, we have a moral responsibility not to do the research, or to withhold its results.  She says:

 As scientists develop ways to generate sequences of base-pairs ever more cheaply and efficiently, the opportunity for the malicious or the simply unreflective to play with pathogens to see what kind of traits arise looms larger.  And it is not just technological know-how that can be problematic.  The detailed knowledge of cellular or genetic functioning can have worrisome implications as well.  Knowledge of what makes a virus more transmissible can assist us in detecting when a virus might be more prone to producing an epidemic, but it could also be used to make viruses more virulent.

In sum, scientists are responsible for both what they intend to achieve and that which is readily foreseeable, as we all are.  There is nothing inherent in becoming a scientist that removes this burden of responsibility.  The burden can be shared—scientists can come together to decide how to proceed, or ask for input from ethicists, social scientists, even the broader public.  Alternatively, scientists could decide (and it has been proposed) that some forms of regulation—either in the selection of projects or in the control and dissemination of results—be imposed on the field of synthetic biology, to reduce the risks.  The more oversight scientists submit to, the less responsibility they bear, but it comes at the cost of the freedom to choose the type of work they can do and how they do it.  This is the essential tension:  as long as there is freedom of research, there is the responsibility that comes with it.

Well, now.
There are two big problems with this piece.  The first hinges on the idea that we can control the production of knowledge, as though it were a tap that only we have our hands on.  My comment on her piece was biting, but I’m not particularly inclined to apologize for that:
Ah, yes, because if we decide not to do research due to the possibly dangerous side-effects, that means that no one else can discover that knowledge either!  Whew, what a relief.  It would be terribly frightening if the scientific method were available to just *anyone*, lab researcher or terrorist bent on biowarfare alike.
Even putting aside the problem of intention vs outcome, as Brian mentioned, I would refer you to the computer security industry and the disastrous failure of security through obscurity policies.  In the days of $1000 genomes and DIYbio, the idea that we can stuff the genie back in the bottle if we just cover our ears and loudly declare “Nah nah nah nah, not researching that!” is ludicrous.

Science is a distributed process, and knowledge is out there for anyone who cares to look for it.  If I “foresee” that researching zebra finch song production (as an example of research I don’t do) will lead to bioweapons production in the Middle East, I can … what?  Send out a strongly worded email to every lab working on zebra finches across the world and hint that zebra finch songs hold the key to the apocalypse?  Secretly poison every zebra finch in existence to stop research on this ubiquitous lab animal?  I suppose that it’s not entirely impossible that I could convince every legitimate scientific lab on Earth to stop working on zebra finch songs – I’d have to schlep all over the world to do it, and somehow convince every PI working on it to stop (academic cat herding, anyone?) – but even if I did, what then?  Zebra finches aren’t that hard to find or work on, so preventing this bioweapons outcome would somehow require forcing mainstream labs across the globe to maintain absolute secrecy while also forbidding them to work on the problem.  And this all depends crucially on human beings not being curious. That hasn’t worked out very well in the past.

The issue is that there’s no single body that is responsible for science, and no way to control the production of knowledge.  This works in our favour when it comes to religious groups and other nutjobs who wish to squash the advancement of human knowledge;  it doesn’t when we would like to put the cat back in the bag.  So, in this regard, Douglas’ argument is finished before it begins, because what she is asking for is impossible on the face of it.  It might be possible to slow things down, and hiding details may be a responsible thing to do in some circumstances (though I’ve yet to see a convincing case for this), but if the knowledge is valuable it is free for anyone to find.

The second problem with Douglas’ argument is the idea of “foreseeable”.  She’s rather vocal about this:

Many of the comments seemed to miss the key point that scientists are responsible for foreseeable consequences only– and I would want to restrict that to specifically and clearly foreseeable consequences.  This bounds the scope of responsibility, and in the way that is done in everyday life (and in courts of law– the reasonable person standard for foreseeability is the common standard for issues of negligence).

This problem cuts in both directions: toward the past and toward the future. In the past, retroactively deciding what was or was not foreseeable in the production of human knowledge is going to be an impossible task.  Science is, by definition, the process of creating knowledge and understanding that does not yet exist;  how that will interact with the body of knowledge we currently have and the body of knowledge that we have yet to produce is not foreseeable by any reasonable person.  No reasonable person has a system for deciding what will be the outcome of research – if we did, we wouldn’t need to do the research.  The reasonable person standard doesn’t apply here, because by definition you are retroactively assigning probabilities to events that effectively had a uniform distribution (when you produce the research, all things are possible).  Oh, but doing research into virulence is more likely to produce bioweapons, you say!  Really?  Or is it just as likely to produce a way to stop them (more below)?  Note that I’m speaking about basic research here, not applied research directly focused on producing weapons.  If your intention is to produce a weapon, you bear the responsibility of creating that weapon;  if your intention is to create new knowledge, assigning responsibility for what is done with that knowledge is a lot less clear.

In short, I would argue that “specifically and clearly foreseeable” is a red herring;  my argument is that producing new knowledge always has consequences that are not specifically and clearly foreseeable.  She uses a legal definition that doesn’t apply, and then wants us to abide by it. Ah, you say, but some consequences of research are easily foreseeable:  if you produce a new viral strain with higher virulence, then a bioterrorist could use that to infect a population and kill millions.  But now my response looks to the future:  unintended consequences cut both ways.  Producing a viral strain with higher virulence could be the thing that cracks the problem and lets us produce a vaccine or cure for the virus in question;  only by understanding how something works can we stop it.  And if producing that knowledge could have saved millions of lives but Douglas stops it, does she accept responsibility for the deaths?  If millions die and someone later uses that viral strain to produce a cure, do we get to punish Douglas because she stopped us from finding that cure when it was “specifically and clearly foreseeable”?  After all, I’m a reasonable (and reasonably educated) person, and I can conceive of many ways that such research could have helped.

Herein lies the core of the problem:  science is grey, not black and white.  There’s a lot of science that has both good and bad applications.  The laws of classical mechanics apply to nearly every scale of our daily life and make it better in many ways, from the curve of a thrown ball in sports to the launching of satellites into space, but they have also produced nearly all of the tools of conventional war (bullets to bombs).  The discovery of the structure of the atom has left us with nuclear power and nuclear weapons.  Douglas might say that I’m misconstruing her argument, that it’s only “specifically and clearly foreseeable” if it’s on a small scale and really obvious.  But again, I say that this is misdirection:  show me basic research that wasn’t intended to produce a weapon but whose consequences were unambiguously negative.  And when there’s both good and bad, black and white, how do we decide what shade of grey is acceptable?  If it could kill millions (atomic weapons) but potentially provide power enough to help stop climate change (which could potentially impact billions), who gets to decide what is acceptable before we research it?

The fact is, we can’t stop knowledge from being produced.  But knowledge has no moral value or valence in and of itself;  knowing how to make a gun doesn’t force you to construct one and shoot someone with it.  This analogy is apt in another way:  I’m greatly in favour of gun control, and my belief from reading and thinking about the issue is that people cannot be trusted to use guns properly without regulation.  But in scientific terms, this means that we regulate the application of knowledge, not its production.  We don’t tell scientists what they can research; we as a society decide what we’re going to do with what comes out of research.  What I’m proposing is more difficult than what Douglas is proposing, because we don’t get to just cover our eyes and pray that no one else is smart enough to figure out what we’d rather keep hidden.  Under my approach, we have to have hard conversations about how best to deal with and regulate the knowledge when it arrives.  And I’m sorry if that scares her, but that’s the way it’s going to have to be.


The creationist is still in charge…

[Image: Stephen Harper, Canadian politician. This guy isn't helping. Image via Wikipedia.]

In the cabinet shuffle reported by CBC today, it seems that Harper declined to make a change in the minister responsible for science, Gary Goodyear.  You may remember Goodyear as the guy who “won’t confirm his belief in evolution” (as if it matters what he believes on the topic);  David Ng has a great piece at Discovery about how the Harper government – with Goodyear at the tiller – is kicking the bejeezus out of science in Canada.

I spoke to my Master’s advisor the other day, and in the conversation he mentioned that he has to renew his grant this year and that he’s concerned about it.  Truthfully, reading these articles, I am too.  Thankfully, I’m moving to Australia where they seem to have money for science right now;  I’ll come back when (if?) Canadians elect a government that gives a damn about basic research.


Beautiful data isn’t always beautiful.

[Photo by net_efekt, http://flic.kr/p/7Z1NF2]

I’m reading the book Beautiful Data (edited by Toby Segaran and Jeff Hammerbacher), which is a collection of essays on “beautiful” data by people who work with data – data that tells a story, pretty ways to present data, smart ways to manipulate and analyze data, and the like. It’s a good book, though as with all edited collections some chapters are better than others.

But the beginning of one chapter jumped out at me, and I think it’s an observation that bears repeating:

(From Chapter 12: “The Design of Sense.us”, by Jeffrey Heer)

I must confess that I don’t believe in beautiful data. At least not without context.

Prior to World War II, the government of the Netherlands collected detailed civil records cataloguing the demographics of Dutch citizenry. A product of good intentions, this population register was collected to inform the administration of government services. After the German invasion, however, the same data was used to effectively target minority populations (Croes 2006). Of the approximately 140,000 Jews that lived in the Netherlands prior to 1940, only about 35,000 survived.

Though perhaps extreme, for me this sobering tale underscores a fundamental insight: the “beauty” of data is determined by how it is used. Data holds the potential to improve understanding and inform decision-making for the better, thereby becoming “beautiful” in action. Achieving value from data requires that the right data be collected, protected, and made accessible and interpretable to the appropriate audience.

As a group, scientists are among the biggest producers of data on Earth (the biggest, depending on your definition of “scientist”), and though we sometimes like to believe that we are producing this data in a moral vacuum of pure knowledge-seeking, that is a thin fantasy stretched over a harsh reality. The reality is that our data has consequences, and it will be interpreted and used to make decisions and inform opinion whatever we think about the matter. From climate change, to evolution versus creation, to green power generation – we create the data that fuels the debates and policies that shape our world.

My words might be taken as a thinly-veiled suggestion to censor the data we produce, but nothing could be further from the truth. The truth is that because we produce the data, we have the responsibility to interpret it, to make it beautiful for those who need to use it and understand it. We have to make sure that our data is not used in an ugly way. I’ll admit it: a call for scientists to be more active in making their data understandable is hardly novel. But like Jeffrey’s observation above, it bears repeating.


And the lights are back on.

Well, I finished my exam this evening and sent it off.  We’ll see how that goes – I still have to put together and give a presentation in two weeks’ time.  But since I’m back, I thought I’d use the time in between to shift gears and do some writing again.

And one of the things that caught my eye yesterday was this piece by Satoshi Kanazawa over at Psychology Today, fetchingly titled “What if it turns out the Earth was flat after all?” Kanazawa is lamenting the rise of the Freudian explanation for homosexuality in the 1960s, which displaced nascent explanations based on genetics and in utero development.  In his last paragraph, Kanazawa writes:

What happened?  How did we go wrong?  How could scientists in the early 1960s abandon (what we know today to be) the true theory of male sexual orientation for such Freudian nonsense?  In 1966, I was in kindergarten; I was too busy writing a (not terribly original) sequel to 101 Dalmatians to stay abreast of the cutting-edge frontiers in sex research.  (I also believed that girls had cooties, so I would not have made a good objective scientist then.)  But if this kind of reversal of knowledge can happen, if scientific knowledge is not cumulative but cyclical, as sociologists and philosophical conventionalists and relativists would have you believe, then how can we trust any of the knowledge that we produce?  How do we know, for example, that the earth is not flat after all?  We once believed that the earth was flat, but the notion was abandoned in preference for the new idea that the earth was round.  How do we know that, at some point in the future, it will not turn out that the earth was flat after all, as the ancients always believed?

Kanazawa makes two surprisingly basic errors here.  The first is exceedingly obvious:  the ancients didn’t believe that the world was flat.  Eratosthenes, for example, had already worked out an estimate for the circumference of the Earth as far back as 240 B.C.  (I’m nearing the end of a great book that deals with the way our view of the Earth has evolved over the centuries;  I think that Kanazawa would find it an enlightening read).
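(For the curious, here is the back-of-the-envelope version of Eratosthenes’ calculation, using the commonly cited textbook figures – these numbers are my illustration, not something from Kanazawa’s piece or the book.  At noon on the summer solstice the sun was directly overhead at Syene but cast a shadow at Alexandria corresponding to an angle of about 7.2°, or 1/50 of a full circle, and the distance between the two cities was reckoned at roughly 5,000 stadia, so:

\[
C \;\approx\; \frac{360^\circ}{7.2^\circ} \times 5000\ \text{stadia} \;=\; 50 \times 5000\ \text{stadia} \;=\; 250{,}000\ \text{stadia},
\]

which works out to somewhere around 39,000–46,000 km depending on which ancient stadion you assume – remarkably close to the modern equatorial circumference of about 40,000 km.)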

His second error is a little less obvious, but much more important.  The error is committed in the last paragraph, but it’s set up in the first:

Science is a cumulative endeavor.  We build on past knowledge to attain even greater knowledge than before in a progressive manner.  Unfortunately, however, science doesn’t always work as it should.

The view of science that Kanazawa puts forth, as a straightforward and exceedingly linear progression, is fairly simplistic.  Reality is a bit more complicated:  science progresses, and it will always move forward (eventually), but the speed and direction of that movement depend on the evidence available.  When evidence comes quickly and in large amounts, science moves forward quickly and with increasing accuracy.  When evidence is scant or non-existent, scientific knowledge can meander back and forth or even double back.

And it is this context that Kanazawa misses.  In the 1960s, genetics was still a relatively young empirical science, and the evidence concerning possible genetic underpinnings to homosexuality was still weak, at best. With little to no evidence to help scientists choose between the two competing theories, it is no surprise that the scientific community could be dragged off-course.  And, of course, when the evidence began to accumulate for a more complete view of the origin of homosexuality, one based on a better understanding of the interaction between genetics, development, and environment – well, the scientific community adjusted course appropriately.

As to the notion that scientific thought is cyclical, well, I don’t have a lot of time for that nonsense.  People who believe that science is cyclical or just a matter of convention need to stop using antibiotics, using their computers, wearing modern clothes, eating food, … well, my point should be obvious.  Science isn’t infallible by any means, but the community is not going to suddenly ignore mounds of accumulated evidence and adopt a false belief.  It may go astray when there is no evidence to lead it by the hand, but there is little need to worry:  the world is still round.
