Melville on Science vs. Creation Myth

From Melville's under-appreciated Mardi: On a quest for his missing love Yillah, an AWOL sailor...

Non-coding DNA Function... Surprising?

The existence of functional, non-protein-coding DNA is all too frequently portrayed as a great...

Yep, This Should Get You Fired

An Ohio 8th-grade creationist science teacher with a habit of branding crosses on his students'...

No, There Are No Alien Bar Codes In Our Genomes

Even for a physicist, this is bad: Larry Moran, in preparation for the appropriate dose of ridicule...

Michael White

Welcome to Adaptive Complexity, where I write about genomics, systems biology, evolution, and the connection between science and literature, government, and society.

I'm a biochemist


Pharyngula has an interesting graph showing the steady improvement in survival rates for childhood leukemia over the last 40 years. A disease that was essentially an automatic death sentence 40 years ago is now over 90% curable. Leukemia is one of the more spectacular cases, but we have seen steady progress in the successful treatment of many diseases, especially cancers. In contrast, over the last 5-6 years the success rates for research grant applications have gone steadily down. Part of this is due to a growing, more competitive science community, but that growing community is producing some very good ideas that are not being funded. Most colleagues I've discussed this with agree that probably ~65-70% of these grant applications shouldn't be funded (at least not without serious revisions). In the current funding climate, though, the success rate is too low, and worthwhile health science research is not being done.

In biology, everything has a history. Creationists love to try to calculate the probability of a new gene spontaneously coming into existence, but that's not how genes are born. New genes most often come from other genes: one gene gets duplicated by a freak accident (like the accidental duplication of a chunk of chromosome, a whole chromosome, or even an entire genome), so that you suddenly have a cell with two working copies of the same gene. As time goes on (that is, time on an evolutionary scale), those two duplicate genes start to divide up the work that was originally done by just one gene. One copy might end up specializing in one particular task, picking up mutations along the way that gradually transform this copy into an independent gene in its own right, with its own specialized function. From one gene, you get two, each with a distinct role in the cell.
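To make the idea concrete, here is a toy Python sketch (not from the post, and with entirely made-up numbers): an ancestral gene is duplicated, and the two copies then accumulate mutations independently, gradually drifting apart in sequence the way duplicate genes do on their way to distinct roles.

```python
# Toy sketch: gene duplication followed by independent mutation of each copy.
# Mutation rate, gene length, and generation counts are invented for illustration.
import random

random.seed(1)
BASES = "ACGT"

def mutate(seq, rate=0.01):
    """Return a copy of seq with each base mutated with probability `rate`."""
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

def divergence(a, b):
    """Fraction of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Start with one ancestral gene, then "duplicate" it: two identical copies.
ancestor = "".join(random.choice(BASES) for _ in range(300))
copy_a, copy_b = ancestor, ancestor

for generation in range(1, 2001):
    copy_a = mutate(copy_a)
    copy_b = mutate(copy_b)
    if generation % 500 == 0:
        print(f"generation {generation:5d}: copies differ at "
              f"{divergence(copy_a, copy_b):.1%} of positions")
```

This only models neutral drift, of course; in real genomes, selection then shapes which of those accumulating differences stick, letting one copy specialize while the other keeps the original job.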

It sounds like a nice evolutionary story, but do scientists have any real examples of duplicate genes evolving new functions?

The NY Times today has a profile of Congress's 3 PhD physicists. Congressman Vernon Ehlers (R, Michigan), PhD, tells the NY Times that he has had to stop his Congressional colleagues from trying to cut funding for research on game theory and ATM - apparently some politicians thought that game theory is about sports, and that ATM in a science proposal really does stand for automated teller machines. (In this particular case, ATM means Asynchronous Transfer Mode, a high-speed data transfer protocol often run over fiber-optic networks.)
One of the most useful things about having our brains is the ability to anticipate predictable events: we can see that it's going to rain, or that it's getting dark, and prepare accordingly. Some things in life are completely random, some are almost perfectly predictable (like the sun rising tomorrow at 5:35 AM in the Midwest US), and most other things are not quite so regular as the sunrise, but predictable nonetheless. We use the neural networks in our brains to anticipate these events, but anticipation is obviously not limited to organisms that have brains. How do other organisms, like the single-celled E. coli, anticipate change?

Scientists have been trying to understand how and when we gain or lose fat cells, and now a paper in this week's issue of Nature reports that nuclear bombs are the key to solving this problem.

To understand how our bodies regulate our weight, researchers are interested in knowing how the number of fat cells changes over our lifetime - do we stop making more fat cells after adolescence? Do we keep the same fat cells all of our adult lives, or do some die off and get replaced by new ones? The typical way to study the birth and death of cells in live animals is to use radioactive tracers that label DNA, but these experiments are too toxic to try in humans. It turns out, though, that the US and Soviet militaries did the experiment for us, with above-ground nuclear bomb tests in the late 1950s that spewed large amounts of radioactive carbon into the atmosphere. That radioactive carbon is now in our DNA (at least for those of us alive during the Cold War), and it provides a convenient "manufactured on" date for our long-lived fat cells.
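The dating logic is simple enough to sketch in a few lines of Python. The atmospheric values below are placeholders I've invented for illustration, not the real bomb-pulse record - the actual analysis uses published atmospheric 14C measurements - but the matching idea is the same: find the year when the atmosphere had the same 14C level that you now measure in the cells' DNA.

```python
# Illustrative sketch of "bomb pulse" dating. The year -> Delta-14C values are
# rough placeholders (per mil), NOT real reference data.
atmospheric_14c = {
    1965: 700, 1970: 550, 1975: 400, 1980: 280,
    1985: 200, 1990: 150, 1995: 110, 2000: 80, 2005: 60,
}

def estimate_birth_year(dna_delta_14c):
    """Return the post-peak year whose atmospheric 14C best matches the 14C
    measured in a cell population's DNA -- roughly when that DNA was made."""
    return min(atmospheric_14c, key=lambda yr: abs(atmospheric_14c[yr] - dna_delta_14c))

# Example: fat-cell DNA measuring ~190 per mil would point to DNA synthesized
# around the mid-1980s under these placeholder numbers.
print(estimate_birth_year(190))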

Let's say you're going to test a new ACE inhibitor drug for lowering high blood pressure. You would think that one of the obvious questions you want answered in this trial is whether the new medication is better than the existing drugs on the market. If the new drug isn't any better, then the only purpose it serves is to pad a drug company's profits by giving that company a new exclusive drug to market. If your clinical trial compares the new drug only to a placebo, instead of to the older drugs, then you have no idea which one is better. Even worse, you put half the subjects in your blood pressure study at risk by using a placebo - if a good treatment already exists, it's unethical to deny patients that treatment and give them a placebo instead.
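A quick, entirely hypothetical simulation (all effect sizes invented) shows why the placebo-only design is so uninformative: a new drug that is actually worse than the existing standard can still beat placebo handily.

```python
# Hypothetical illustration: a new drug worse than the existing standard still
# looks impressive when the trial only compares it to a placebo.
import random
random.seed(0)

def trial_arm(mean_drop, n=200, sd=12.0):
    """Simulate systolic blood pressure reductions (mmHg) for one trial arm."""
    return [random.gauss(mean_drop, sd) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

placebo  = trial_arm(mean_drop=2.0)    # small placebo effect
old_drug = trial_arm(mean_drop=12.0)   # existing ACE inhibitor
new_drug = trial_arm(mean_drop=8.0)    # new drug: beats placebo, loses to old drug

print(f"new drug vs placebo : {mean(new_drug) - mean(placebo):+.1f} mmHg")
print(f"new drug vs old drug: {mean(new_drug) - mean(old_drug):+.1f} mmHg")
# The second comparison is the one a placebo-controlled trial never makes.
```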