Can we just agree on a name?


A squirrel with personality, from jpaxonreyes (used under a CC license). Because let's face it, this post needs a picture.

Looking at the email alerts I get for new journal issues, I came across a new paper by Sih et al. in Ecology Letters [1], looking at the “ecological implications of behavioural syndromes”.  And I suppose that I could talk about the content of the paper, but what I’d rather do instead is go off on a short rant about research on this topic, as is my right as a blog writer.  What’s got a bee in my bonnet (and why am I suddenly 90 years old)?  It’s the name, “behavioural syndromes”.  It drives me mad.  I’ve seen papers refer to the topic by:

  • “Animal personality”
  • “Behavioural syndromes”
  • “Coping styles”
  • “Animal temperament”
  • “Interindividual variation” – not an SEO-friendly description, to be sure.

There seems to be a political aspect to this too, but I’m not 100% clear on it.  Some themes are clear, though.  My feeling is that Sih seems to be pretty stuck on “behavioural syndromes”, while others like Denis Réale (whom I know from my Ph.D. at UQÀM) and Niels Dingemanse seem to be throwing spaghetti at the wall; after trying to introduce “animal temperament” as a thing [2] – which, as far as I can see, didn’t take hold – they had the (actually quite inspired) idea of doing an end-run around the whole thing by combining personality with plasticity and coining the new phrase “behavioural reaction norms” [3].  Only time will tell if that one takes off.

Lest you think that it’s just a name problem, it seems that confusion in the names is a symptom of deeper confusion over what they’re studying and how to study it.  Hanging around at a couple of the discussions at the last ISBE made it clear that people working in this field don’t agree on the name, the definition, or the methodology (statistical or experimental).  Some of this is cause for excitement, of course:  when you’re this confused, it’s probably a sign that you’re on to something good.  And don’t think that I’m writing the area off;  there’s been a lot of exciting work on personalities over the last decade or so.  Hell, I’m trying to get a paper published on the topic myself right now.  Yet, I can’t help feeling that work in this area is going to be a little bit hamstrung until it converges on clear answers to each of these questions.

And honestly, I just feel sorry for the next poor sod who wants to do a literature review or meta-analysis.  So, can we just agree on a name and call it a day?


[1]. Andrew Sih, Julien Cote, Mara Evans, Sean Fogarty, and Jonathan Pruitt. Ecological implications of behavioural syndromes. Ecology Letters, 15(3):278–289, 2012.

[2]. Denis Réale, Simon M. Reader, Daniel Sol, Peter T. McDougall, and Niels J. Dingemanse. Integrating animal temperament within ecology and evolution. Biological Reviews, 82(2):291–318, 2007.

[3]. Niels J. Dingemanse, Anahita J. N. Kazem, Denis Réale, and Jonathan Wright. Behavioural reaction norms: animal personality meets individual plasticity. Trends in Ecology and Evolution, 25(2):81–89, 2010.


Tell Ontario teachers that they should ban pickles, too.

Ontario teachers:

The Ontario English Catholic Teacher’s Association says computers in all new schools should be hardwired instead of setting up wireless networks.

It also says Wi-Fi should not be installed in any more classrooms.

In a position paper released on Monday, the union — which represents 45,000 teachers — cites research by the World Health Organization.

Last year the global health agency warned about a possible link between radiation from wireless devices such as cellphones and cancer.

Some believe wireless access to the Internet could pose similar risks.

What the WHO actually says:

Are there any health effects?

A large number of studies have been performed over the last two decades to assess whether mobile phones pose a potential health risk. To date, no adverse health effects have been established as being caused by mobile phone use.

What a smart commenter (ve5cma) pointed out:

Headline should read:

“Teachers’ Union Falls for Junk Science”

Sub head:
Standing within sight of a 50,000 watt radio station transmitter, the head of the teachers’ union complained about the 4 watt WiFi router.

What I’m doing right now:


I knew that when the WHO classed cell phones as “possibly carcinogenic” (a classification so soft that it includes pickles as possible carcinogens) people would crawl out of the woodwork using that as an excuse to ban anything electronic that scares them, and wi-fi seems like an obvious target.  And let’s face it, Ontario has had problems with this before.  So I guess I can’t say that I’m too surprised something like this happened, but I sure am disappointed.  No one, including the WHO, has been able to find a link between cancer and cell phones.  So how does the Ontario English Catholic Teachers’ Association think that a radiation emitter being held against your head and failing to cause cancer is a good reason to ban wi-fi throughout schools?  There’s no reason to believe that this kind of radiation has any effect on biological tissue (even if it’s not physically impossible), and the available evidence is strongly against the idea.

It’s just sad that a group of people responsible for teaching science to children can fail so badly at basic scientific literacy.  For shame, Ontario English Catholic Teachers’ Association, for shame.

Update: Orac at Respectful Insolence hits the same notes with a lot more depth.


Opening up to open access…

If you’ve been reading this blog for any period of time, you’ll know that I’m not overly political here.  That’s not to say that I’m not opinionated (I’ve got plenty of opinions), but I don’t tend to post specifically in order to weigh in on timely political topics.  However, having recently wandered into a battleground only to discover that I’ve chosen a side, it feels right to jot down a few thoughts on the topic while my blood is still simmering.

What am I talking about?  Well, I’m talking about Open Access (OA) science publishing.  If you’re reading this blog you’re probably familiar with at least the basics of the debate, but whether you are or not I highly recommend reading this parable by Mike Taylor that puts the issue in clear language, and this recent press release on Tim Gowers’s blog explaining the reasoning behind the Elsevier boycott (5398 signatures and going strong!).

I recently ended up tangling with Robert Kurzban and others over this subject on the Evolutionary Psychology blog, and in the process of joining the discussion (I’ll refer you to that post to read more;  it’s too long to reproduce here) I discovered that I do, in fact, support the ideals of OA with a strength of feeling that surprised me.  What do I believe?  My belief, upon looking at the ethics and data (what exists, anyways) behind the current journal publishing system, is that there are two parts to the argument:

  1. Publishers like Elsevier employ a business model that is exploitative and verges on (is?) corrupt.  Their product is the free labor of scientists, repackaged and copyrighted with little to no added value in the internet era, and sold back at a massive profit to the public that paid for it and the scholars who created it.
  2. The second part of the argument is whether a better alternative exists, in the form of Open Access (OA) publishing.  I believe that OA is a better alternative ethically, financially, and for the betterment of science as a whole.  OA has an obvious effect on the reach of a scientific paper (as the audience is now potentially anyone with an internet connection and the interest to read it), can have a positive effect on citation, and is more accountable and transparent.  On the other hand, the models for OA are still being worked out, and although the issue of publishing fees has been overblown it is still a valid concern, as are concerns such as a lack of OA journals in one’s sub-field, issues about career advancement (maybe), and the effort to move journals to OA publishing or start new journals.

Where do I stand now?  Well, for one thing, I haven’t yet signed the Elsevier boycott.  Why not?  Because I haven’t thought it all the way through yet.  I’ve just recently sat down to get my head straight about open access and found that I have strong feelings on it, and so I’m still coming up with a way to work that into my professional life.  I don’t know yet what form that will take;  I would like to publish exclusively in OA journals, but I don’t know if that’s yet an achievable goal for me, especially as I have collaborators to consider who may or may not feel the same way.  I do know that I will be searching for good OA alternatives to the journals I publish (ha!) in now, and that I will be looking for ways to increase my support of the OA movement as much as I can.

And if all else fails, I’ll write some more blog posts.  That helps, right? 🙂


Sight-reading my science

Sight Reading, by skelly98; used under a CC license.

Parents often say things like “when you’re older, you’ll be glad we made you do this”.  They’re also often wrong about that, but occasionally they get it right.  In my case, one of the few things that I agree with unreservedly is that I did indeed come to appreciate the time I spent learning to play the piano.  At least, I agree in hindsight;  as a child, in the future tense, I most certainly did not experience wild joy from sitting in front of our old, battered upright piano, stuck down in the basement and banging away for hours.  In younger times, I much preferred reading to the endless repetition of scales and pieces, but this was a preference that did not endear me to my mother.  

So, I developed a compromise system that appeased the attentive ear sitting upstairs and awaiting the next masterpiece in A-flat.  Lessons were held at the home of Mrs. Birch, my Jekyll/Hyde piano teacher who was a perfect candidate for Kindly Grandmother Jekyll of the Year until a student unwittingly sat themselves down at her Yamaha baby grand and unleashed Generalissimo Hyde;  these lessons inevitably involved carting several books to and from her house, which required a dedicated book bag that sat beside the piano.  This was a perfect place to stash whatever book I was reading while I whipped off a quick left-handed play-through of whatever Bach fugue or Mozart piano concerto I was mostly ignoring, after which I would haul out my book and greedily mow through as many pages as I felt I could get away with before the warden would get restless upstairs.


If you don’t play an instrument, you might not be familiar with the concept of sight-reading.  Learning to “read” music, to turn the notes on the page into a series of motor commands that lead to music coming from the instrument you are playing, is an important skill for any musician to develop and sight-reading is just the logical endpoint of that skill.  Sight-reading involves playing music placed in front of you for the first time as though you’ve been playing it for years;   at least, that’s what it’s supposed to be like, but it often involves a fair bit more squinting, scrambling, and muttered cursing than you might reasonably expect, especially when some bastard just handed you Mozart’s Rondo alla Turca and you’ve never played it before.  Experienced sight-readers will also probably agree that it requires the ability to see things coming, because if your eyes aren’t a few notes out in front of the notes that you’re playing right at this moment, the next thing that you’re going to be doing is trying to extricate your fingers from whatever hellish gang-sign-slash-car-wreck they’ve managed to tangle themselves into when you stopped paying attention.


Mrs. Birch wasn’t just a stern piano instructor, she was also a tattle-tale.  If she felt that you weren’t up to snuff, she would take the accompanying parental figure aside when they came back to pick you up and tell them that you clearly hadn’t been practicing enough.  Since the result of this was a crackdown that I wished to avoid, I was left with an extremely strong incentive to be good enough at the lessons to avoid such an outcome. Unfortunately, my equally strong disincentive to practice in favor of my much-preferred reading left me in a bit of a pickle, since it was hard to look like I’d practiced when I really hadn’t.  Thus, I became really good at sight-reading.


When it comes to science, my problem isn’t that I don’t like to practice.  In fact, I read and think about science all the time.  But the trouble is, I have a terrible memory.  I know that this admission blows a hole in the scientific mystique, because if you watch television, every scientist on TV (besides being an intellectual giant) has what seems like unbelievably perfect recall.  Arguably, it’s their defining characteristic, since they’re rarely shown performing other skills like critical thinking or deep analysis of a problem – with notable exceptions, of course.  On the other hand, like any good story there’s a fairly large grain of truth to it;  those prodigious memories do (mostly) exist, and I’m sure that you even know someone like that.  Many of the best scientific minds I know do, in fact, have fantastic recall of the things in their field, good enough that it makes me feel inadequate to be in the same room with them.  Even the ones who can barely remember to tie their own shoes without a Post-it note on their laces can rattle off details of papers they read as an undergrad 25 years ago without pausing for breath.

Of course, some perspective is important here, both in assessing others and in assessing oneself.  When looking at the way other scientists recall information and data, it’s important to remember that they’ve spent their entire adult lives on these topics. Sometimes, this is really obvious.  Academia is a training program in narrowing your focus until you’re the world’s foremost expert in an area of knowledge so tiny that sometimes you’re also the only person in the world who cares;  anyone who has gone through the Ph.D. process and received a degree at the end is going to be able to spout reams of facts about their chosen topic, even if only from sheer self-defence.  And, when you feel insecure, it’s also easy to suffer from a perverse confirmation bias, where you only remember the times that other people sound smart and make you feel stupid by comparison.  In academia, there are always plenty of smart people to make you feel inadequate, much like I imagine women feel when reading fashion magazines, flipping through page after page of ads that make them feel like they have to measure up to the impossible standards depicted therein. And, finally, it’s easy to exhibit sampling bias that borders on a half-baked solipsism:  since you can peer inside your own thoughts and see all of the failures of your own memory and cognition in real-time but the thought processes of others are opaque, it’s possible to forget (or disbelieve) that others can feel like that too.

With those caveats firmly in place, it is still pretty clear to me that I’m not that scientist.  You know the one: the diamond-tipped bit on the drill of science, driving a hole into our uncertainty and powering through to the truth in their field.  These are the sorts of scientists whose praises are rightfully sung for their life-long dedication to a field;  to pluck a name out of the air at random, I’m thinking of people like Frans de Waal, who has spent decades expanding our knowledge of primate social behaviour and evolution.  This dedication to primate behaviour has rewarded him handsomely with world-wide recognition as one of the foremost names in this field, which is as it should be [1]. But I’m just not that guy.  I can’t face the idea of an entire career spent drilling down into one topic;  there’s too much out there, and I want to play in more than just one sandbox.  You can see this in my academic history, where I’ve wandered from computer science (undergrad), detoured briefly into classics, back to psychology (undergrad and M.Sc), to behavioural ecology (Ph.D.), and now I’ve dropped into the depths of evolutionary biology to work on the dynamics of viruses and bacteria.  Unfortunately, this sort of academic field-hopping is viewed with suspicion, at best (“Narrow and deep is good.  Shallow and broad is usually not appropriate”).  And, it doesn’t really maximize my production of papers, which means that if I want to maintain an academic career I will probably need to settle down soon.  In truth, I think I’m getting there, because I keep coming back to questions relevant to behavioural ecology even when studying pathogens [2].

In sum, the path my career has taken and my cognitive limitations have left me with this basic truth:  I’m a scientific sight-reader.  What does that mean?  Lacking prodigious recall and Renaissance-man tendencies, it means that I’m always in the soup.  For one thing, I’m always having to look things up, even surprisingly basic things.  Usually, it’s to confirm to myself that my memory hasn’t played tricks on me (a problem that arises because I don’t tend to use the same techniques and knowledge repeatedly), but sometimes it’s simply because my background is shallow, not deep.  Sometimes, I’ve missed things.  In meetings or conversations, I have to think hard, because I need to be a step ahead of the conversation if I’m going to be of any use to it.  I need to find hooks to the knowledge that I do have, ways that I can draw analogies to and from things I know, applications to a problem that come from my background.  I’m always squinting, mumbling, and cursing my way through the fog of uncertainty, scrambling to stay one step ahead before I lose the plot. Like many things, this is both a blessing and a curse.  To the positive, though I’m not a world-expert in most of what I do, I can usually contribute something just by virtue of having that broad toolkit.  My years of experience with programming have led me to carve out a nice niche as a simulation guy in behavioural ecology – “computer jockey”, as my Ph.D. advisor put it.  That, my study of behaviour, and my background in statistics did get me the postdoc I’m holding now; I may be master of none, but I’m still a jack of all trades.  On the other side of the coin is the problem of depth.  I often need to rely on other, smarter people to make sure that I’m not making basic mistakes.  This isn’t all bad, as I love spending time picking the brains of those smart people and working with them.  But it certainly does not promote self-confidence, and I always feel like I’m one step away from being exposed as a fraud (though it’s probably sub-clinical).

Am I a poorer scientist for it?  In these days of increasing specialisation and balkanisation of scientific fields, there are many who would say yes.  And perhaps they’re right.  My interests tend toward the interdisciplinary, an idea to which much lip service is paid and little support seems to be given.  It’s certainly caused me no end of troubles, and it will probably keep causing more.  But I remain stubbornly convinced that there are benefits, too. Perhaps being forced to constantly leap about to stay in front has made me pay attention, if nothing else.  I don’t know where this will lead, but I do know one thing:  even if I’m not destined to change the world with my science, there’s nothing else on this earth that I would rather be doing with my life.


Recitals and exams, those were the parts of playing piano that I always hated the most.  They drew most strongly on the skills I had avoided, including long hours of careful repetition and memorisation down to the last note.  Not to say that good piano players are robots;  quite the contrary, that laser-like focus provides the raw material for some of the greatest musicianship known.  That sort of depth frees you to be creative by moving your cognition about what you’re doing to a more sophisticated level, somewhat like having a large vocabulary frees you to be able to read almost anything without needing to parse the text every time.  

I was never going to be a musician.  This doesn’t mean that my time was wasted, though, or that all was lost.  Years later, when I was hopelessly in love with the woman who would later become my wife (and she was as yet completely uninterested in me), I sat down with her at the piano and played an arrangement of a song I loved that I had never seen before.  I had played other arrangements of this piece and loved them, but this particular set of squiggles on the musical page was new to me.  Yet when I sat down at the piano and she snuggled in beside me, I was inspired to play that song as though I had been practicing it all my life.  And through the years, we’ve  talked about the history of the piece, and I’ve told her about some of the science – the mathematics, physics,  psychology, and biology – that informs music and musical experience.   My knowledge may not be deep, but its breadth can lead to unforeseen recombination;  that piece of music and my playing it for her became part of a tapestry of woven skill and knowledge that helped form my relationship with the person I love.  And, at the risk of post-hoc justification, I wouldn’t have it any other way.  

1. To be clear, I’m referring to his work only; I don’t know Dr. de Waal personally, though I have absolutely no reason to believe that he’s anything other than a perfectly nice person.
2. This is something I’ll be blogging more about as I develop the line of research and hopefully present it at ISBE in August.

Is it the 1960s again?


Ritual Sacrifice of the Gummulate Tribe! by Grizdave, used under a CC license


Found in a textbook today ([1], p. 14-15), immediately following a discussion of Ebola and Lassa fever infections in humans:

While having the death of a host individual occur as the product of an encounter with a pathogen may seem like a dire outcome, this outcome represents a mechanism of defence operating at the level of the host population.  If a particular infectious agent is something against which members of the host population could not easily defend themselves, then it may be better to have that particular host individual die (and die very quickly!) to reduce the possible spread of the contagion to other members of the population.

In other words, if it looks like you’ve been infected by something nasty, you sacrifice yourself to stop its spread for the good of the other members of your population.

Look, I’ll be the first to admit that I hold a dim view of multi-level selection, but I’d be really surprised if anyone in the MLS camp were to make an argument as simple-minded as this.  Virulence is a complex topic, certainly, but the above paragraph could have been lifted from a previously-unknown book by Wynne-Edwards in the 1960s and no one would know the difference.  How is it that people are still getting away with stuff like this forty years after it was first shredded by the likes of George Williams and John Maynard Smith?


1. Christon J. Hurst. Defining the ecology of viruses. In Christon J. Hurst, editor, Studies in viral ecology, volume 1, chapter 1, pages 3–40. John Wiley and Sons, Inc., 2011.


Rejection Watch Vol. 1(3): Dave Walter

Dr. David Walter is a current member of the Department of Biological Sciences at the University of Alberta (where I did my undergrad and M.Sc.;  unfortunately, I was in the Psychology department then, and never met Dave) and he’s also an advisor to the Royal Alberta Museum on mite behaviour, ecology, and identification.  You can also find him blogging at Macromite’s Blog where he has some quite amazing pictures.  Dave sent me this great story of the perils of naming new species for Rejection Watch:


This only counts as a near rejection, so you may decide to reject it yourself, but your tale of meeting your nemesis at a poster at a conference reminded me of a similar encounter. I’m an acarologist and long ago got used to having my papers sent back with the ‘not of interest to our readership’ theme and soon found the journals that would find mites of interest or learned to hammer through the few papers that more general journals would accept and, other than a few bent nails that couldn’t be straightened, have had a reasonably successful career.

My graduate training was in both ecology and systematics, but at heart I was an ecologist and considered the taxonomy part just plain hard work with no reward. Still, when you need a name to hang some behaviour on, you may have to describe new species. Very early in my first postdoc I found that I was up to my 13th new species description. That seemed a bad sign and having just read an article in Smithsonian magazine about fear of the number 13, I was inspired to name my new species ‘triskadecaphobiae’. Well, first thing that went wrong was the word was too long to fit on a slide label, but by the time that I figured that out the paper was already off to the Annals of the Entomological Society of America.

Eventually the reviews came back on official forms (this was a long time ago) and the second problem appeared – whoever typed the form had misspelled triscadecaphobiae. Still, one review was okay, but the other is etched in my mind and went more or less like this: ‘Normally I would suggest accepting such a paper with minor revision, but because of the author’s obvious scientific immaturity as evidenced by his choice of a scientific name, I recommend rejection. A scientific name should march down the ages as a testimony to the good taste of the author …’ and so on for several stinging sentences.

Later, it turned out that a friend of mine was visiting the lab of the referee at the time my paper had arrived and the ref had come storming out of his office red-faced, waving my paper, and shouting ‘who is this arsehole Walter and who does he think he is?’ and possibly other less kind things that I have forgotten. It also turned out that another friend – we had been graduate students together – was a postdoc in this lab. I had previously named another species in honour of this student by appending the Latin for ‘belonging to’ (-ianus) to his name. This is perfectly correct (and went to a French journal, so they noticed nothing amiss), but of course, sounds rather asinine. ‘X-ianus’ was, basically, puerile and I suppose every time my friend saw the name he was annoyed and let people know it. So, I was reaping what I had sowed.

The editor of the journal, a famous entomologist who I suppose should remain nameless, was very nice and asked me for my opinion. I thought about the name for a while, considered the problem with the labels and a future of misspellings (not to mention my reputation), and suggested that I name the species after the editor – by appending an ‘i’ to his name. That worked perfectly and I was able to get past #13 and add another line to my CV. It was 20 years before I found the mite again, but it turns out to be common in the northern Great Plains, so the name has turned out to be both useful and honours a great entomologist.

I still manage to sneak a pun into a paper every now and then, and at least one species name has made a list of such irreverences (Funkotriplogynium iagobadius – species named after the King of Funk, James Brown, – but the genus was someone else’s and the species from one of my postgraduate students following in my mould and I just went along). So, I guess I’ve never really learned to grow-up completely, but I have become more circumspect (and insidious).

Even better, I ran into the referee in front of a poster at an Ent Soc meeting a couple of years later and stopped to introduce myself and admit to triskadecaphobiae. He turned out to be delightful and we subsequently enjoyed a productive correspondence. Turns out his comments to the editor were much less vitriolic than his comments to the author and he was simply taking the opportunity to take me down a peg. Still, I wonder what will happen when I get to my 13th new genus …


I want to thank Dave for sending me that and remind you that Rejection Watch is driven entirely by reader submissions, so if you’ve been holding on to yours until now, get them into me!  That email address again is, so send me your best academic rejection story now, and I’ll throw in this free juicer*!


*Limit of 0 juicers per person

Fixing HieroTeX on OS X Lion.

I have some weird hobbies, including puttering with ancient languages.  As part of that, I’d like to be able to typeset Egyptian hieroglyphic (not ‘hieroglyphics’:  pet peeve of mine.  Hieroglyphic is the language, hieroglyphs are the characters;  saying ‘hieroglyphics’ is like saying ‘I speak Frenchs’) in LaTeX using TeXShop for OS X.  There’s a great package for doing this, HieroTeX, and TeXShop actually comes with clear and usable instructions for installing it to work on a Mac (see the instructions in ~/Library/TeXShop/Engines/Inactive/Hiero).  Unfortunately, there’s one problem:  HieroTeX, as it is currently compiled and distributed, is a PowerPC application that requires Rosetta to run.  Since Rosetta has been removed from Lion, the first time you try to typeset anything using TeXShop and HieroTeX, the process will fail with a pop-up telling you that the executable file sesh is no longer supported because it is a PowerPC application.

I’ve managed to recompile the sesh executable from source on a Lion machine.  If you’re running TeXShop on Lion, this may help you.

Here’s how to fix this.  The short version:

  1. Download the recompiled sesh from here, and unzip the file (let’s say onto your Desktop).
  2. Place the new sesh into /usr/texbin (you will likely be asked for your admin password):   cd ~/Desktop; sudo cp sesh /usr/texbin
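If you want to confirm which architecture a sesh binary was built for, the `file` utility will tell you.  A small sketch (the /usr/texbin path is the one used above; the fallback to /bin/ls is purely so the command runs on machines without HieroTeX installed):

```shell
# Report the architecture of the installed sesh binary.
target=/usr/texbin/sesh
# Fall back to a binary that exists everywhere, purely for illustration.
[ -x "$target" ] || target=/bin/ls
file "$target"
# A Rosetta-era sesh shows up as a ppc Mach-O executable;
# a native rebuild shows x86_64 (or ELF on Linux).
```
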

This isn’t guaranteed to work at all, but I’ve tested it with TeXShop 3.06 running on Lion (10.7.2).  Lion should be the only version of OS X that requires this fix (I think).

The long way is to recompile sesh from source yourself.  If you want to do that, you’ll need to grab the source from here.  Get the file called HieroTex.tar.gz and unzip it.  It will create a source tree with the sesh source in the Seshnesu subdirectory.  To make it, use make sesh (or muck with the variables in HieroTex/ first, but it worked for me without changes).  Unfortunately, trying to compile the unmodified source will fail because the code uses malloc.h, which is outdated and won’t be found on Lion.  To fix this, grep the source files and replace every instance of #import <malloc.h> with #import <stdlib.h>, unless it has already been imported elsewhere in the file, in which case just delete the malloc line.  It should compile now, and you can move it to the texbin directory as above.  After that, TeXShop should successfully typeset HieroTeX.  A word of warning:  the first round of typesetting after installing HieroTeX and replacing sesh will take forever as the fonts get sorted out.  Just go get yourself a drink and wait.  If you like, here’s a simple file to test the install that I got from the HieroTeX manual:

\begin{hieroglyph}
A1
\end{hieroglyph}

If this file is successfully typeset by TeXShop, you’re up and running!
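For what it’s worth, the grep-and-replace step described above can be sketched as a small shell loop.  This is illustrative only:  ‘src’ stands in for the unpacked Seshnesu directory, the pattern matches either the #import or #include spelling of the malloc.h line, and it does not handle the case where stdlib.h is already pulled in elsewhere (which, as noted above, you handle by just deleting the malloc line).

```shell
# Illustrative sketch of the malloc.h fix; 'src' is a stand-in for
# the unpacked Seshnesu source directory.
mkdir -p src
printf '#import <malloc.h>\nint main(void) { return 0; }\n' > src/demo.c

# Rewrite every outdated malloc.h line to pull in stdlib.h instead.
# (GNU sed shown; BSD sed on OS X wants an explicit backup suffix,
# e.g. sed -i '' -E ...)
grep -rl 'malloc\.h' src | while read -r f; do
  sed -i -E 's@#(import|include) <malloc\.h>@#include <stdlib.h>@' "$f"
done
```
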


Just a picture of a flower.

Just a picture of a flower that I took in Centennial Park (in Sydney) a little while ago.  I liked it, nothing more than that.

Science is grey, not black and white.

tl;dr Blaming scientists for the knowledge they produce is a pointless task fraught with insoluble problems.  It will work better if we police the use of knowledge rather than its production.

Over at TheScientist, Heather Douglas has put forth her opinion on the moral responsibility of scientists in a piece entitled “The Dark Side of Science”.  Her argument is fairly straightforward:  science can have bad consequences, and if we as scientists can foresee those consequences, we have a moral responsibility to not do the research or to hide the results of this research away.  She says:

 As scientists develop ways to generate sequences of base-pairs ever more cheaply and efficiently, the opportunity for the malicious or the simply unreflective to play with pathogens to see what kind of traits arise looms larger.  And it is not just technological know-how that can be problematic.  The detailed knowledge of cellular or genetic functioning can have worrisome implications as well.  Knowledge of what makes a virus more transmissible can assist us in detecting when a virus might be more prone to producing an epidemic, but it could also be used to make viruses more virulent.

In sum, scientists are responsible for both what they intend to achieve and that which is readily foreseeable, as we all are.  There is nothing inherent in becoming a scientist that removes this burden of responsibility.  The burden can be shared—scientists can come together to decide how to proceed, or ask for input from ethicists, social scientists, even the broader public.  Alternatively, scientists could decide (and it has been proposed) that some forms of regulation—either in the selection of projects or in the control and dissemination of results—be imposed on the field of synthetic biology, to reduce the risks.  The more oversight scientists submit to, the less responsibility they bear, but it comes at the cost of the freedom to choose the type of work they can do and how they do it.  This is the essential tension:  as long as there is freedom of research, there is the responsibility that comes with it.

Well, now.
There are two big problems with this piece.  The first hinges on the idea that we can control the production of knowledge, as though it were a tap that only we have our hands on.  My comment on her piece was biting, but I’m not particularly inclined to apologize for that:
Ah, yes, because if we decide not to do research due to the possibly dangerous side-effects, that means that no one else can discover that knowledge either!  Whew, what a relief.  It would be terribly frightening if the scientific method were available to just *anyone*, lab researcher or terrorist bent on biowarfare alike.
Even putting aside the problem of intention vs outcome, as Brian mentioned, I would refer you to the computer security industry and the disastrous failure of security through obscurity policies.  In the days of $1000 genomes and DIYbio, the idea that we can stuff the genie back in the bottle if we just cover our ears and loudly declare “Nah nah nah nah, not researching that!” is ludicrous.

Science is a distributed process, and knowledge is out there for anyone who cares to look for it.  If I “foresee” that researching zebra finch song production (as an example of research I don’t do) will lead to bioweapons production in the Middle East, I can … what?  Send out a strongly worded email to every lab working on zebra finches across the world and hint that zebra finch songs hold the key to the apocalypse?  Secretly poison every zebra finch in existence to stop research on this ubiquitous lab animal?  I suppose that it’s not entirely impossible that I could convince every legitimate scientific lab on earth to stop working on zebra finch songs – I’d have to schlep all over the world to do it, and somehow convince every PI working on it to stop (academic cat herding, anyone?) – but even if I did, what then?  Zebra finches aren’t that hard to find or work on, so stopping research on this bioweapons outcome would somehow require forcing mainstream labs across the globe to maintain absolute secrecy while also forcing them not to work on this problem.  And this all depends crucially on human beings not being curious. That hasn’t worked out very well in the past.

The issue is that there’s no single body that is responsible for science, and no way to control the production of knowledge.  This works in our favour when it comes to religious groups and other nutjobs who wish to squash the advancement of human knowledge;  it doesn’t when we would like to put the cat back in the bag.  So, in this regard, Douglas’ argument is finished before it begins, because what she is asking for is impossible on the face of it.  It might be possible to slow things down, and hiding details may be a responsible thing to do in some circumstances (though I’ve yet to see a convincing case for this), but if the knowledge is valuable it is free for anyone to find.

The second problem with Douglas’ argument is the idea of “foreseeable”.  She’s rather vocal about this:

Many of the comments seemed to miss the key point that scientists are responsible for foreseeable consequences only– and I would want to restrict that to specifically and clearly foreseeable consequences.  This bounds the scope of responsibility, and in the way that is done in everyday life (and in courts of law– the reasonable person standard for foreseeability is the common standard for issues of negligence).

This acts in both the past and the future. In the past, retroactively deciding what was or was not foreseeable in the production of human knowledge is going to be an impossible task.  Science is, by definition, the process of creating knowledge and understanding that does not yet exist;  how that will interact with the body of knowledge we currently have and the body of knowledge that we have yet to produce is not foreseeable by any reasonable person.  No reasonable person has a system for deciding what will be the outcome of research – if we did, we wouldn’t need to do the research.  The reasonable person standard doesn’t apply here, because by definition you are retroactively assigning probabilities to events that effectively had a uniform distribution (when you produce the research, all things are possible).  Oh, but doing research into virulence is more likely to produce bioweapons, you say!  Really?  Or is it just as likely to produce a way to stop them (more below)?  Note that I’m speaking about basic research here, not applied research directly focused on producing weapons.  If your intention is to produce a weapon, you bear the responsibility for creating that weapon;  if your intention is to create new knowledge, assigning responsibility for what is done with that knowledge is a lot less clear.

In short, I would argue that “specifically and clearly foreseeable” is a red herring;  my argument is that producing new knowledge always has consequences that are not specifically and clearly foreseeable.   She uses a legal definition that doesn’t apply, and then wants us to abide by it. Ah, you say, but some consequences of research are easily foreseeable:  if you produce a new viral strain with higher virulence, then a bioterrorist could use that to infect a population and kill millions.  But my response is now in the future:  unintended consequences cut both ways.  Producing a viral strain with higher virulence could be the thing that cracks the problem and lets us produce a vaccine or cure for the virus in question;  only by understanding how something works can we stop it.  And if producing that knowledge could have saved millions of lives but Douglas stops it, does she accept responsibility for the deaths?  If millions die and someone later uses that viral strain to produce a cure, do we get to punish Douglas because she stopped us from finding that cure when it was “specifically and clearly foreseeable”?  After all, I’m a reasonable (and reasonably educated) person, and I can conceive of many ways that such research could have helped.

Herein lies the core of the problem:  science is grey, not black and white.  There’s a lot of science that has both good and bad applications.  The laws of classical mechanics apply to nearly every scale of our daily life and make it better in many ways, from the curve of a thrown ball in sports to the launching of satellites into space, but they have also produced nearly all of the tools of conventional war (bullets to bombs).  The discovery of the structure of the atom has left us with nuclear power and nuclear weapons.  Douglas might say that I’m misconstruing her argument, that it’s only “specifically and clearly foreseeable” if it’s on a small scale and really obvious.  But again, I say that this is misdirection:  show me basic research, not intended to produce a weapon, that had unambiguously negative consequences.  And when there’s both good and bad, black and white, how do we decide what shade of grey is acceptable?  If it could kill millions (atomic weapons) but potentially provide power enough to help stop climate change (which could potentially impact billions), who gets to decide what is acceptable before we research it?

The fact is, we can’t stop knowledge from being produced.  But knowledge has no moral value or valence in and of itself;  knowing how to make a gun doesn’t force you to construct one and shoot someone with it.  This analogy is apt in another way:  I’m greatly in favour of gun control, and my belief from reading and thinking about the issue is that people cannot be trusted to use guns properly without regulation.  But in scientific terms, this means that we regulate the application of knowledge, not its production.  We don’t tell scientists what they can research;  we as a society decide what we’re going to do with what comes out of research.  What I’m proposing is more difficult than what Douglas is proposing, because we don’t get to just cover our eyes and pray that no one else is smart enough to figure out what we’d rather keep hidden.  In my way, we have to have hard conversations about how best to deal with and regulate the knowledge when it arrives.  And I’m sorry if that scares her, but that’s the way it’s going to have to be.


A perfect companion to Rejection Watch…

If you’ve been enjoying the Rejection Watch series on this blog, you might want to head over to The Journal of Universal Rejection and submit your latest masterpiece.  From their home page:

The founding principle of the Journal of Universal Rejection (JofUR) is rejection. Universal rejection. That is to say, all submissions, regardless of quality, will be rejected. Despite that apparent drawback, here are a number of reasons you may choose to submit to the JofUR:

  • You can send your manuscript here without suffering waves of anxiety regarding the eventual fate of your submission. You know with 100% certainty that it will not be accepted for publication.
  • There are no page-fees.
  • You may claim to have submitted to the most prestigious journal (judged by acceptance rate).
  • The JofUR is one-of-a-kind. Merely submitting work to it may be considered a badge of honor.
  • You retain complete rights to your work, and are free to resubmit to other journals even before our review process is complete.
  • Decisions are often (though not always) rendered within hours of submission.

And after you’ve submitted, don’t forget to check out their blog, which houses a fabulous collection of rejection letters from their esteemed editors.