Making the Cut

Sequence replacement to cure sickle cell disease

Jacob Corn

My lab recently published a paper, together with outstanding co-corresponding authors David Martin (CHORI) and Dana Carroll (University of Utah), in which we used CRISPR to reverse the causative allele for sickle cell disease in bone marrow stem cells. This work got some press, so unlike other papers from the lab I won’t use the blog to explain what we did and found. You can go elsewhere for that. But I do want to explain the motivation behind the work, as well as why we chose this approach.

Sickle cell disease and gene editing

Next-generation gene editing is already transforming the way scientists do research, but it also holds a great deal of promise for the cure of genetic diseases. One of the most tractable genetic diseases for gene editing is sickle cell disease (SCD). The molecular basis has been known since 1949, so it’s relatively well understood. Its root cause is in bone marrow stem cells (aka hematopoietic stem cells, or “HSCs”), which are easy to get to with editing reagents. It’s monogenic and recessive, so you only need to reverse one disease allele for a cure. There’s no widely-used cure – though bone marrow transplantation from healthy donors can cure the disease, very few patients get the transplant for a variety of reasons (unlike severe combined immunodeficiency, another HSC disease in which most patients do get transplants). And we know from various sources that editing just 2-5% of alleles can provide benefit to patients (which equates to 4-10% of cells, per the Hardy-Weinberg principle).
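To make that allele-to-cell arithmetic concrete, here is a minimal sketch in Python. The 2-5% allele figures come from the paragraph above; the assumption that edited alleles are distributed independently across diploid cells is the Hardy-Weinberg-style approximation, and the code is purely illustrative.

# Fraction of diploid cells carrying at least one edited allele, assuming
# edited alleles are distributed independently across cells (Hardy-Weinberg-style).
def cells_with_edit(allele_fraction):
    return 1 - (1 - allele_fraction) ** 2
for p in (0.02, 0.05):
    print(f"{p:.0%} of alleles edited -> ~{cells_with_edit(p):.1%} of cells carry an edit")
# prints roughly 4% and 10%, matching the 4-10% range quoted above

Because the disease is recessive, a cell with even one corrected allele behaves like a healthy carrier, which is why the cell-level fraction is roughly double the allele-level fraction.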

But while several groups have tried to make a preclinical gene editing candidate to cure SCD by replacing the disease allele, most efforts have so far met with challenges. Things at first looked very promising in model cell lines, but when people moved to HSCs they found that the efficiency of allele conversion dropped substantially. Efficacious replacement of disease alleles is something of a holy grail for gene editing, but has so far lagged behind our ability to disrupt sequences.

Sequence knockout to cure SCD

In the field of SCD, problems with sequence replacement have prompted efforts to find a way to use sequence disruption to ameliorate the disease. The most promising of these approaches lead to the upregulation of fetal hemoglobin through a variety of mechanisms, including tissue-specific disruption of Bcl11A (it needs to be tissue-specific because Bcl11A does a lot of things in a lot of cells). Upregulation of fetal hemoglobin has been observed in humans (in Hereditary Persistence of Fetal Hemoglobin, or HPFH), and is protective against the sickling of adult hemoglobin.

Fetal hemoglobin upregulation strategies are very exciting, make for some fascinating basic research, and could eventually lead to new treatments for SCD patients. There are a few people in my lab working along these lines, and several other groups are doing incredible work in this area. But while the editing is easier, there are still some major challenges to clinical translation, mostly due to the new, mostly unexplored biology surrounding tissue-specific disruption of Bcl11A.

Sequence replacement to cure SCD

We decided to go back to basics and find out whether our work on flap-annealing donors and the Cas9 RNP could put the “boring” approach of sequence replacement back on the table. In this case, we want to be as boring as possible. I’d rather not get surprised by new biological discoveries about gene regulation while we’re working in patients.

We also wanted our approach to be as general as possible – to develop something that works for SCD but is accessible to clinical researchers everywhere. That would fit the democratization theme of Cas9. Both the nuclease targeting reagent and the approach to allele replacement would be fast, cheap, and easy for everyone to use, so that everyone could ask questions about their own system in HSCs and maybe even develop gene editing cures for the particular disease in which they’re working.
This is in contrast to some viral-based editing strategies that seem reasonably effective but are very slow to iterate and have a high barrier to entry. If possible, I’d rather develop something that any hematopoietic researcher or clinical hematologist can pick up and use, with rapid turnaround from idea to implementation. That’s the way you get to clinician-driven cures for rare genetic diseases, which is the long-term promise of easily reprogrammable gene editing.

All of the above is why we used Cas9 RNP and coupled it with flap-annealing single stranded DNA donors. We found that we didn’t even need to use chemical protection of the guide RNA or single stranded DNA, though we certainly tried that. In short-term experiments we found very high levels of editing, and in long-term mouse experiments we found editing almost five-fold higher than previously reported. I think this is certainly good enough for researchers to start using this method to tackle their own interesting biologies. But for us, there’s still a lot of work to be done before this becomes a cure for SCD.

Next steps towards the clinic

First, we need to try scaling up our editing reagents and method (Cas9, guide RNA, donors, stem cells, and electroporation) to clinical levels, and we also need to source them with clinical purity. This is a non-trivial step, but absolutely critical to eventually starting a clinical trial. Second, we need to establish the safety of our approach. We found one major off-target site while doing the editing, and it lies in a gene desert. But just because it’s not near anything that looks important doesn’t mean we don’t need to do functional safety studies! (See my previous blog post about safety for gene editing for more on this.) We have a battery of safety studies planned in non-human models, and we want to take a close look to see whether we can take the next step into the clinic.

My collaborators and I are committed to the clinical application of gene editing to cure sickle cell disease, and we hope to start a clinical trial within the next five years. But the data will guide us – we want to be very careful, so that we know the cure is not worse than the disease. There are many moving parts involved in these translational steps and some of them are a little slow, so please stay tuned as we try to bring a breakthrough cure to patients with SCD.


Improved knockout with Cas9

Jacob Corn

Cas9 is usually pretty good at gene knockout. Except when it isn’t. Most people who have gotten their feet wet with gene editing have had an experience like that in the following gel, in which some guides work very well but others are absolute dogs.

 
That’s a problem if you have targeting restrictions (e.g. when going after a functional domain instead of just making a randomly placed cut). So what can one do about it?
 

TL;DR Adding non-homologous single stranded DNA when using Cas9 RNP greatly boosts gene knockout.


 

The problem

There have been a few very nice papers showing that Cas9 prefers certain guides. I refer to this as the One True Guide hypothesis, the idea being that Cas9 has somehow evolved to like some protospacers and dislike others. The data doesn’t lie, and there is indeed truth to this – Cas9 likes a G near the PAM and hates to use C. But guides that are highly active in one cell line are poor in others, and comparing guide preference experiments in mouse cells vs worms gives very different answers. That’s not what you’d expect if the problem lies solely in Cas9’s ability to use a guide RNA to make a cut.
 
But of course, Cas9 is only making cuts. Everything else comes down to DNA repair by the host cell.
 

Our solution

In a new paper from my lab, just out in Nature Communications, we found that using a simple trick to mess with DNA repair can rescue totally inactive guides and make it easy to isolate knockout clones, even in challenging (e.g. polyploid) contexts. We call this approach “NOE”, for Non-homologous Oligonucleotide Enhancement.
(The acronym is actually a bit of a private joke for me, since I used to work with NOEs in a very different context, and Noe Valley is a nice little neighborhood in San Francisco.)
 
How does one perform NOE? It’s actually super simple. When using Cas9 RNPs for editing, just add non-homologous single stranded DNA to your electroporation reaction. That’s it. This increases indel frequencies several fold in a wide variety of cell lines and makes it easy to find homozygous knockouts even when using guides that normally perform poorly.
 
The key to NOE is having extra DNA ends. Single stranded DNA works the best, and even homologous ssDNAs that one might use for HDR work. We tend to use ssDNAs that are not homologous to the human genome (e.g. a bit of sequence from BFP) because they make editing outcomes much simpler (NHEJ only instead of NHEJ + HDR). But double stranded DNAs also work, and even sheared salmon sperm DNA does the trick! Plasmids are no good, since there are no free ends.
 
We know that NOE is doing something to DNA repair, because while this works in many cell lines, the molecular outcomes differ between cells! In many cells (5/7 that we’ve tested), NOE causes the appearance of very large deletions (much larger than you would normally see when using Cas9). But in 2/7 cells tested, NOE instead caused the cells to start scavenging little pieces of double stranded DNA and dropping them into the Cas9 break! The junctions of these pieces of DNA look like microhomologies, but we haven’t yet done the genetic experiments to say that this is caused by a process such as microhomology mediated end joining.
 

What’s going on here?

How can making alterations in DNA repair so drastically impact the apparent efficacy of a given guide? We think that our data, together with data from other labs, implies that Cas9 cuts are frequently perfectly repaired. But this introduces a futile cycle, in which Cas9 re-binds and re-cuts that same site. The only way we observe editing is when this cycle is exited through imperfect repair, resulting in an indel. Perfect repair makes a lot of sense for normal DNA processing, since we accumulate DNA damage all the time in our normal lives. We’d be in a sorry state indeed if this damage frequently resulted in indels. It seems that NOE either inhibits perfect repair (e.g. titrating out Ku?) or enhances imperfect repair (e.g. stimulating an ATM response?), though we are still lacking direct data on mechanism at the moment.
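As a rough illustration of how that futile cycle plays out, here is a toy calculation with made-up per-cycle probabilities (not measured values): the chance of eventually trapping an indel compounds over repeated rounds of cutting.

# Toy model of the cut / perfect-repair cycle: each round Cas9 re-cuts an intact
# site, and repair leaves an indel with some small per-cycle probability.
# P(indel after n rounds) = 1 - (1 - p_per_cycle) ** n
def indel_probability(p_per_cycle, n_rounds):
    return 1 - (1 - p_per_cycle) ** n_rounds
for p in (0.02, 0.10):        # made-up per-cycle imperfect-repair rates
    for n in (1, 10, 50):
        print(f"p={p:.2f}, {n:2d} rounds -> P(indel) ~ {indel_probability(p, n):.2f}")

The point of the sketch is just that anything tipping per-cycle repair toward imperfect outcomes (one way NOE might act) compounds quickly over repeated cutting.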
 

What is it good for?

The ability to stimulate incorporation of double stranded DNA into a break might be useful, since non-homologous or microhomology-mediated integration of double stranded cassettes has recently been used for gene tagging. But we haven’t explicitly tried this. We have also found NOE to be very useful for arrayed screening, in which efficiency of the edit is key to phenotypic penetrance and subsequent hit calling.
 
Importantly, NOE seems to work in primary cells, including hematopoietic stem cells and T cells. We’ve been using it when doing pooled edits in unculturable primary human cells, and find that far higher fractions of cells have gene disruptions when using NOE. We’ve so far only worked in human cells with RNP, and I’m very interested to hear people’s experience using NOE in other organisms. We haven’t had much luck trying it with plasmid-based expression of Cas9, but other groups have told me that they can get it to work in that context as well.
 

How do I try it?

So if you’re interested, give it a shot. The details are all in our recent Nature Communications paper, but feel free to reach out if you have any more questions. This work was done by Chris Richardson (the postdoc who brought you flap-annealing HDR donors), Jordan Ray (an outstanding undergrad who is now a grad student at MIT), and Nick Bray (a postdoc bioinformatics guru).


Safety for CRISPR

Jacob Corn

This post is all about establishing safety for CRISPR gene editing cures for human disease. Note that I did not say this post is about gene editing off-targets. We’ll get there, but you might be surprised by what I have to say.

Contrary to what some might say or write, most of us gene editors do not have our heads in the sand when it comes to safety. From a pre-clinical, discovery research point of view, the safety of a given gene editing technology is relatively meaningless. There are many dirty small molecules out there that you’d never want to put in a person but are ridiculously useful to help unravel new biology. Given that pre-clinical researchers have the luxury of doing things like complementation/re-expression experiments and isolating multiple independent clones, let’s dissociate everyone’s personal research experience with CRISPR (all pre-clinical at this point) from questions of safety. Those experiences are useful and informative, but so far anecdotal and not necessarily tightly linked to the clinic.

Despite what we gene editors like to think, CRISPR safety is not a completely brand new world full of unexplored territory. While there are some important unanswered questions, there’s a lot of precedent. Not only are other gene editing technologies already in the clinic, but non-specific DNA damaging agents are actually effective chemotherapies (e.g. cisplatin, temozolomide, etoposide). In the latter case, the messiness of the DNA damage is the whole point of the therapy.

Here are a few thoughts about the safety of a theoretical CRISPR gene editing therapy, in mostly random order. I’ll preface this by saying that, while I have experience in the drug industry, I’m by no means an expert on clinical safety and defer to the real wizards.

Safety is about risk vs reward

Let’s start with a big point: as with any disease, the safety of a gene editing therapy is all about the indication. The safety profile of a treatment for a glioblastoma (very few good treatment options for a fatal disease with fast progression) will be very different than a treatment for eczema. And the safety tolerance of a glioblastoma treatment that increases progression-free survival by only two days is going to look different than one that increases overall survival by five years. So there won’t be One True Rule for gene editing safety, since most of the equation will be written by the disease rather than the therapy.

Gene editing safety is about functional genotoxicity

While most of the safety equation is about the disease, the treatment itself of course needs to be taken into account. At heart, gene editing reagents are DNA damaging agents, and so genotoxicity is a big concern. Does the intervention itself disrupt a tumor suppressor and lead to cancer? Does it break a key metabolic enzyme and lead to cell death? As mentioned above, there are plenty of DNA damaging agents that wreak havoc in the genome, but are tolerated due to risk/reward and the lack of a better alternative (I’m especially looking at you, temozolomide). The key to this point is the function of what gets disrupted.

With CRISPR, we have the marked advantage that guide RNAs tend to hit certain places within the genome. We know how to design the on-target and are still figuring out how to predict and measure the off-targets. But even with perfect methods for off-targets, we’d still need to do the functional test. Consider a “traditional” therapeutic (small molecule or biologic) – while an in vitro off-target panel based on biochemistry is valuable, it’s no substitute at all for normal-vs-tumor kill curves (as an example). And even those kill curves are no substitute for animal models. The long-term, functional safety profile of a gene editing reagent is the key question, and with CRISPR I’d argue that we’re still too early in the game to know what to expect. The good news is that ZFNs so far seem pretty good, giving me a lot of hope for gene editing as a class.

Gene editing safety is NOT about lists of sequences

You’d think that determining an exhaustive list of off-target sequences would be a critical part of any CRISPR safety profile. But in the example above, contrasting in vitro biochemical assays with organismal models, I consider lists of off-targets to be equivalent to the biochemical assay. I’m going to be deliberately controversial for a moment and posit that, for a therapeutic candidate, you shouldn’t put much weight on its list of off-target sites. As stated above, you should instead care about what those off-targets are doing, and for that you might not even need to know where the off-targets are located.

When choosing candidate therapeutics in a pre-clinical mode, lists of off-target sequences can be very useful in order to prioritize reagents. If one guide RNA hits two off-target sites and another hits two hundred, you’d probably choose the former rather than the latter. But what if one of the two off-targets is p53? What if the two hundred are all intergenic? Given the fitness advantage of oncogenic mutations, the math involved in using sequencing (even capture-based technologies) to detect very rare off-target sites is daunting. Being able to detect a 1 in a million sequence-based event sounds incredible, but what if you need to edit as many as 20 million cells for a bone marrow transplant? That’s twenty cells you might be turning cancerous without ever knowing it. Now we come right back around to function – you should care much more about the functional effect of your gene edit than about a list of sequences. That list of sequences is nice for orders-of-magnitude and useful to choose candidate reagents, but it’s no substitute for function.
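To put rough numbers on that: the 1-in-a-million detection limit and the ~20 million cells are the figures from this paragraph, and the rest is just expectation arithmetic.

# Expected number of transplanted cells carrying an off-target event that sits
# right at the detection limit of a sequencing assay.
detection_limit = 1e-6    # "1 in a million" events
cells_per_graft = 20e6    # ~20 million edited cells for a bone marrow transplant
print(f"~{detection_limit * cells_per_graft:.0f} cells could carry the event undetected")
# ~20 cells could carry the event undetected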

Is gene editing safety about immunogenicity?

There are two big questions around gene editing immunogenicity: the immunogenicity of the reagent itself, and on-target immunogenicity if the edit introduces a sequence that’s novel to the patient. What happens when the reagent itself induces a long-term immune response? For therapies that require repeat dosing, this can kill a program (hence a huge amount of work put into humanizing antibodies). A therapy that causes someone to get very sick on the second dose is not much good, nor is it useful if antibodies raised to the therapy end up blocking the treatment. But what about in situ gene editing?

Most in situ gene editing reagents are synthetic or bacterial, so one might raise antibodies against them, but the therapy itself is (ideally) one-shot-to-cure. In that case, as long as there’s not a strong naive immune response, maybe it doesn’t matter if you develop antibodies to the editing reagent? There are few answers here for CRISPR, and most work with ZFNs has been with ex vivo edits, where the immune system isn’t exposed to the editing reagent. Time will tell if this is a problem, and animal models will be key. Even more subtle: what happens when a gene edit causes re-expression of a “normal” protein that a patient has never before expressed (e.g. editing the sickle codon to turn mutant hemoglobin into wild type hemoglobin)?

The potential for a new immune response against a new “self” protein is probably related to the extent of the change – a single amino acid change (e.g. sickle cell) is probably less likely to cause problems than introducing a transgene (e.g. Sangamo’s work inserting enzymes into the albumin locus for lysosomal storage disorders and hemophilia). But once again, I’ve heard a lot of questions and worry about this problem but very few answers. In vivo experiments are desperately needed, and the closer to a human immune system the better.

Moving forward based on the predictive data

As you’ve probably gathered by now, I have a healthy respect for functional characterization when it comes to safety. That’s why it’s absolutely critical that we keep moving forward and not let theoretical worries about arbitrary numbers of off-targets stifle innovation without data. These are tools that could some day help patients in desperate need and with few other options, so let the truly predictive functional data rule the day.


CAR-Ts and first-in-human CRISPR

Jacob Corn

(This post has been sitting in my outbox for a bit thanks to some exciting developments in the lab, so excuse any “dated” references that are off by a week or two)

My news feeds have been alight with the news of Carl June’s (U Penn) recent success at the RAC (Recombinant DNA Advisory Committee) in proposing to use CRISPR to make better CAR-Ts. Specifically, they’re planning to delete TCR and PD-1 in NY-ESO-1 CAR-Ts, melding checkpoint immunotherapy with targeted cell killing. Back in April 2015 I predicted a five year horizon for approval of ex vivo therapies based on a gene edited cell product, and this green light, leading to a Phase I trial, could be consistent with that timeframe. Crazy to think…    
CAR-Ts are a clear place where gene editing can make an important difference in the short term, and companies are already hard at work in this arena – Novartis has a collaboration with Intellia and Juno is working with Editas. In this context, the work done by June’s team is incredibly important and paves the way for therapeutic cell products made with gene editing.
But most news headlines declare this “the first use of CRISPR in humans”. While technically true, in that gene editing is being used to make a cell product, this description doesn’t really sound right to me. The headline is that we’re about to start doing gene editing in humans, with the implication that genetic diseases are in the crosshairs.
But CAR-Ts of course target cancer, and are not permanent additions to a patient’s body. You can stop taking CAR-Ts in a way that is difficult or impossible with other genetically modified cell products (e.g. edited iPSCs for regenerative medicine) or gene editing therapeutics (e.g. in situ reversal of a disease allele). And there are other ways to tackle some of these cancers (though none nearly so effective). We’ll learn a lot from these next-gen CAR-Ts and I have no doubt that they’ll impact patients, but will the main learnings be specific to CAR-Ts and oncology? Is this an important step in establishing safety or efficacy for CRISPR as a way to cure genetic diseases?
We know that primary T cells are actually surprisingly easy to edit, that editing doesn’t cause T cells to keel over, and that Sangamo paved the way with ZFN-modified T cells for HIV some time ago. But each Cas-based reagent potentially has unique advantages and liabilities (especially genotoxic ones), and nothing has yet been used at scale. Perhaps we’ll learn something generalizable about using these proteins to make therapeutic products, and maybe set the stage for functional safety assays? Maybe a main benefit is to get the public comfortable with the idea of gene editing to combat disease?
Cancer immunotherapy is certainly a powerful meme at the moment and tapping in to that good feeling could raise awareness and public perception of gene editing. I’m looking forward to results from June’s work with great excitement, whether it’s aimed at genetic disease or not.


CRISPR Challenges – Imaging

Jacob Corn

This post is the first in a new, ongoing series: what are the big challenges for CRISPR-based technologies, what progress have we made so far, and what might we look forward to in the near future? I’ll keep posting in this series on an irregular basis, so stay tuned for your favorite topic. These posts aren’t meant to belittle any of the amazing advances made so far in these various sub-fields, but to look ahead to all the good things on the horizon. I’m certain these issues are front and center in the minds of people working in these fields, and this series of posts is aimed at bringing casual readers up to speed with what’s going to be hot.

First up is CRISPR imaging, in which Cas proteins are used to visualize some cellular component in either fixed or live cells. This is a hugely exciting area. 3C/4C/Hi-C/XYZ-C technologies give great insight into the proximity of two loci averaged over large numbers of cells at a given time point. But what happens in each individual cell? Or in real time? We already know that location matters, but we’re just scratching the surface on what, when, how, or why.

CRISPR imaging got started when Stanley Qi and Bo Huang fused GFP to catalytically inactivated dCas9 to look at telomeres in living cells. Since then, we’ve seen similar approaches (fluorescent proteins or dyes brought to a region through Cas9) and a lot of creativity used to multiplex up to three colors. There’s a lot more out there, but I want to focus on the future…

What’s the major challenge for live cell CRISPR imaging in the near future?

Sensitivity

Most CRISPR imaging techniques have trouble with signal to noise. It is so far not possible to see a fluorescent Cas9 binding a single copy locus when there are so many Cas9 molecules floating around the nucleus.  So far imaging has side-stepped signal to noise by either targeting repeat sequences (putting multiple fluorescent Cas9s in one spot) or recruiting multiple fluorophores to one Cas9. Even then, most CRISPR imaging systems rely on leaky expression from uninduced inducible promoters to keep Cas9 copy number on par with even repetitive loci.  Single molecule imaging of Halo-Cas9 has been done in live cells, but again only at repeats. Even fixed cell imaging has trouble with non-repetitive loci. Sensitivity is also a problem for RCas9 imaging – this innovation allowed researchers to use Cas9 directed to specific RNAs to follow transcripts in living cells. But it was mostly explored with highly expressed (e.g. GAPDH) or highly concentrated (e.g. stress granule) RNAs. How can we track a single copy locus, or ideally multiple loci simultaneously, to see how nuclear organization changes over time?
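For a feel of why single-copy loci are so hard, here is a crude signal-to-background sketch. Every number in it (nuclear volume, free Cas9-GFP copy number, confocal detection volume) is an assumption chosen for illustration, not a measurement.

# Crude signal-to-background estimate for imaging a single-copy locus with a
# fluorescent dCas9 fusion.  All parameters are illustrative assumptions.
nuclear_volume_fl   = 500.0   # assumed nuclear volume (femtoliters)
detection_volume_fl = 0.1     # assumed confocal detection volume (femtoliters)
free_cas9_copies    = 1e5     # assumed unbound dCas9-GFP molecules per nucleus
bound_at_locus      = 1       # one dCas9 bound at a single-copy site
background = free_cas9_copies * (detection_volume_fl / nuclear_volume_fl)
print(f"signal: {bound_at_locus} bound molecule vs ~{background:.0f} free molecules in the focal volume")
# Targeting a repeat (many Cas9s per spot) or recruiting many fluorophores per
# Cas9 multiplies the signal term, which is exactly what current methods do.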

Someone’s going to crack the sensitivity problem, allowing people to watch genomic loci in living cells in real time. Will we learn how intergenic variants alter nuclear organization to induce disease? Will we see noncoding RNAs interacting with target mRNAs during development? With applications this big, I know many people are working on the problem and I’m sure there will be some big developments soon.


Ideas for better pre-prints

Benjamin Gowen

A few weeks ago, Jacob wrote a blog post about his recent experience with posting pre-prints to bioRxiv. His verdict? “…preprints are still an experiment rather than a resounding success.” That sounds about right to me. I’m bearish on pre-prints right now because the very word implies that the “real” product will be the one that eventually appears “in print”. Don’t get me wrong–I think posting pre-prints is a great step toward more openness in biology, and I applaud the people who post their research to pre-print servers. Pre-prints are also a nice work-around to the increasingly long time between a manuscript’s submission and its final acceptance in traditional journals; posting a pre-print allows important results to be shared more quickly. There’s a lot of room for improvement, though. With some changes, I think pre-print servers could better encourage a real conversation between a manuscript’s authors and readers. Here are some of my thoughts on how they might achieve that. I know there are several flavors of pre-print servers out there, but for this post I’m going to use bioRxiv for my examples.

 

Improve readability

It’s 2016, we’ve got undergraduates doing gene editing, but most scientific publications are still optimized for reading on an 8.5×11” piece of paper. Pre-prints tend to be even less readable–figures at the end of the document, with legends on a separate page. The format discourages casual browsing of pre-prints, and it ensures the pre-print will be ignored as soon as a nicely typeset version is available elsewhere. I will buy a nice dinner for anyone who can make pre-prints display like a published article viewed with eLife Lens.

 

Better editability

bioRxiv allows revised articles to be posted prior to publication in a journal, but I would like a format that makes it really easy for authors to improve their articles. Wikipedia is a great model for how this could work. On Wikipedia, the talk page allows readers and authors to discuss ways to improve an article. The history of edits to a page shows how an article evolves over time and can give authors credit for addressing issues raised by their peers. Maintaining good version history prevents authors from posting shoddy work, fixing it later, and claiming priority based on when the original, incomplete version of the article was posted.

 

Crowd-source peer review

Anyone filling in a reCAPTCHA to prove they’re not a robot could be helping improve Google Maps or digitize a book. What if pre-print servers asked users questions aimed at improving an article? Is this figure well-labeled? Does this experiment have all of the necessary controls? What statistical test is appropriate for this experiment? With data from many readers about very specific pieces of an article, authors could see a list of what their audience wants. It looks like we need to repeat the experiments in Figure 2 with additional controls. Everybody likes the experiments in Figure 3, but they hate the way the data are presented.

 

Become the version of record

Okay, this one’s definitely a stretch goal. Right now pre-prints get superseded by the “print” version of the article, but that doesn’t need to be the case. Let’s imagine a rosy future in which articles on bioRxiv are kept completely up-to-date. Articles are typeset through Lens, making them more readable than a journal’s PDF. There’s a thriving “talk” page where readers can post comments or criticisms. Maybe the authors do a new experiment to address readers’ comments, and it’s far easier to update the bioRxiv article than to change the journal version. At that point, bioRxiv would become the best place to browse the latest research or make a deep dive into the literature. Traditional journals could still post their own versions of articles, provided they properly cite the original work, of course.


Undergraduates change the world

Jacob Corn

I’m a big fan of undergraduate research as a way of connecting students to scientific discovery (as opposed to the toy examples often used in teaching labs). And so there are a lot of undergrads running around the lab, each paired up with a research scientist, postdoc, or grad student. We had about eight undergrads this past year, four of whom are working on senior theses (and hence are headed out the door soon).

Once a year, I have all of the undergrads give talks in big, festival-style meetings that are informally called Undergradapalooza. Volunteers give 10 minute talks (+ 5 minutes for questions) that are meant to be fun ways to tell the rest of the lab about their projects and progress. The thesis-writers give much more formal 30 minute talks that should cover a whole mini-story.

Undergradapalooza was this last Monday and Tuesday (2 hours per day), and I was blown away by the quality of the presentations and the students’ accomplishments. One student, who started just last semester and had never done mammalian cell culture before, endogenously Flag-tagged a gene in Jurkat cells – all the way from designing the experiment to sequence verifying several homozygously-tagged clones. Another student systematically compared several guide RNA formats (sgRNA vs crRNA:tracr, IVT vs synthetic, etc) and made sgRNAs targeting ninety(!) different genes for an arrayed screen. Yet another student was a driving force behind our sickle cell editing work – internally, all of the sickle sgRNAs are named after her (J1, J2, J3, etc.) – and this semester she used next generation sequencing to test wild type Cas9 vs both improved specificity Cas9 variants at the sickle codon and several off-target loci apiece in biological triplicate. [undergrads reading this blog, if I didn’t mention your project it’s only for reasons of space/length]

This is incredible! These undergrad students are all kicking butt and taking names. Their productivity is phenomenal – some of these experiments would have been a major part of a Ph.D. just a few years ago.

What does this mean beyond the fact that Berkeley has great students? It gets back to my ulterior motive for immersing undergrads in gene editing. Old fogeys like us current PIs are thinking of ways to use next-gen gene editing and regulation that we consider innovative. These are no doubt exciting, but they’re necessarily burdened by our preconceptions about what’s possible. The really surprising advances will come in five to ten years, when students who have only ever known a world with easy gene editing hit their stride. To them, this crazy tech will be routine!

It’s the everyday uses of incredible tech that really change the world. Just this morning I had a video call with my wife, who is doing epidemiological research in rural Botswana. I was walking down the street in Berkeley and she was in the middle of a field in Africa. During the call I pulled up her GPS coordinates and looked at a satellite view of that same field on Google Maps. The whole experience was far beyond what even early 90’s sci-fi imagined for the future. But it’s now routine in our lives, and since a whole generation takes it for granted, they’re already dreaming of the next big thing.

Now take the above paragraph and substitute pervasive gene editing for networking and computing technology. It might take longer to filter from the lab to the sidewalk, but surprising applications will abound and no doubt change all of our lives. Viva la undergrads!


Why preprints in biology?

Jacob Corn

I’m going to take a step away from CRISPR for a moment and instead discuss preprints in biology. Physicists, mathematicians, and astronomers have been posting manuscripts online before peer-reviewed publication for quite a while on arxiv.org. Biologists have recently gotten in on the act with CSHL’s biorxiv.org, but there are others such as PeerJ. At first the main posters were computational biologists, but a recent check shows manuscripts in evo-devo, gene editing, and stem cell biology. The preprint crowd has been quite active lately, with a meeting at HHMI and a l33t-speak hashtag #pr33ps on twitter.

I recently experimented with preprints by posting two of my lab’s papers on biorxiv: non-homologous oligos subvert DNA repair to increase knockout events in challenging contexts, and using the Cas9 RNP for cheap and rapid sequence replacement in human hematopoietic stem cells. Why did I do this, and how did it go?

There have been some divisive opinions about whether or not preprints are a good thing. Do they establish fair precedence for a piece of work and get valuable information into the community faster than slow-as-molasses peer review? Or do they confuse the literature and encourage speed over solid science?

In thinking about this, I’ve tried to divorce the issue of preprints from that of for-profit scientific publication. I found that doing so clarified the issue a lot in my mind.

Why try posting a preprint? Because it represents the way I want science to look. While a group leader in industry, I was comfortable with relative secrecy. We published a lot, but there were also things that my group did not immediately share because our focus was on making therapies for patients. But in academia, sharing and advancing human knowledge are fundamental to the whole endeavor. Secrecy, precedence, and so on are just career-oriented externalities bolted onto basic science. I posted to biorxiv because I hoped that lots of people would read the work, comment on it, and we could have an interesting discussion. In some ways, I was hoping that the experience would mirror what I enjoy most about scientific meetings – presenting unpublished data and then having long, stimulating conversations about it. Perhaps that’s a good analogy – preprints could democratize unpublished data sharing at meetings, so that everyone in the world gets to participate, not just a few people in-the-know.

How well did it go? As of today the PDF of one paper has been downloaded about 230 times (I’m not counting abstract views), while the other was downloaded about 630 times. That’s nice – hundreds of people read the manuscripts before they were even published! But only one preprint has garnered a comment, and that one was not particularly useful: “A++++, would read again.” Even the twitter postings about each article were mostly ‘bots or colleagues just pointing to the preprint. I appreciate the kind words and attention, but where is the stimulating discussion? I’ve presented the same unpublished work at several meetings, and each time it led to some great questions, after-talk conversations, and has sparked a few nice collaborations. All of this discussion at meetings has led to additional experiments that strengthened the work and improved the versions we submitted to journals. But so far biorxiv seems to mostly be a platform for consumption rather than a place for two-way information flow.

Where does that leave my thoughts on preprints? I still love the idea of preprints as a mechanism for open sharing of unpublished data. But how can we build a community that not only reads preprints but also talks about them? Will I post more preprints on biorxiv? Maybe I’ll try again, but preprints are still an experiment rather than a resounding success.


PS – Most journals openly state that preprints do not conflict with eventual submission to a journal, but Cell Press has said that they consider preprints on a case-by-case basis. This has led to some avid preprinters declaring war against Cell Press’ “draconian” policies, assuming that the journals are out to kill preprints for profit motives alone. By contrast, I spoke at some length with a senior Cell Press editor about preprints in biology and had an incredibly stimulating phone call – the editor had thought about the issues around preprinting in great depth, probably even more thoroughly than the avid preprinters. I eventually submitted one of the preprinted works to a Cell Press journal without issue. Though I eventually moved the manuscript to another journal, that decision had nothing to do with the work having been preprinted.


Thinking about threats

Jacob Corn

In early February the Worldwide Threat Assessment listed CRISPR as a dual-use technology, one that could be used for either good or bad (think nuclear power). At the end of that same month, a delegation from the intelligence community asked to meet with me.

Five years ago, I would never have dreamed of typing that previous sentence. Now it’s a day in the life.

I flew back from a Keystone meeting a day early (this one was my science vacation, just for fun) to meet with the group. I really didn’t know what to expect from the delegation. I had visions of people dressed in severe suits who wore dark glasses indoors and could read my email at the press of a button. Instead, I was treated to a lively, well-rounded group of scientists, ethicists, and economists. I could have imagined any one of them walking the halls of UC Berkeley as faculty. Though one’s business card was redacted in thick black sharpie (no joke).

We were joined by Berkeley’s Director of Federal Relations and had an outstanding discussion lasting a few hours. I learned a lot from the group about how government educates itself and came away very impressed with the people the U.S. government charges with looking into emerging technologies.

But I do have a point to make in bringing up this unusual visit.

As scientists, I think we should work responsibly with our new gene editing capabilities and be honest about the potential dangers. The whole point of next-gen gene editing is that it’s fast, cheap, and easy. I have undergraduates volunteering in my lab who edit their first genes within a month of joining. That’s exciting, but can also be scary. I agree with the threat assessment that CRISPR could be used for bad things, since it’s just a tool. In fact, it might even be possible to accidentally use CRISPR in a bad way. Think AAV editing experiments designed for mice that accidentally also target human sequences.

But like I said, gene editing is just a tool. A hammer can build a house, but it can also hit someone on the head. Likewise, gaining one tool doesn’t make everything easy. Try building a house with only a hammer.

Just because we now have democratized gene editing doesn’t mean that bad things will start popping up left and right. Bacterial engineering has been around for a long time, but it’s still hard to do bad things in that arena. There are many other barriers and bottlenecks in the way, and the same is true for bad guys who might try gene editing.

So what should we do? As gene editors, I think we should closely and enthusiastically engage with appropriate agencies. This includes federal and state bodies, and even local groups like campus EH&S. We should also be instilling a culture of responsibility and safety in the lab, even above and beyond normal safety. It’s one thing for a postdoc to remember their PPE, but it’s another thing to think to ask, “Should I talk to someone before I do this experiment?” Security through obscurity is not the way, but sometimes it really is better to first talk things through in a very wide forum. Remember the outcry about the H5N1 flu papers… 

The idea is not to scare people. The technology isn’t scary, and gene editing really isn’t new. It’s just easier and cheaper now, which changes the equation a bit. We should be open about risks and proactive about managing them, otherwise they’ll be managed for us.


Apologies for the long delay between posts. I was teaching this last semester and also trying to get three papers out the door. 


WaPo Guest blogging

Jacob Corn

Instead of our regularly scheduled (in reality intermittently posted) blog on this site, I’ll direct you to a guest blog I wrote for the Washington Post. The topic is one I’ve touched on before: the biggest impact of Cas9 will not be in the clinic, but in its ability to accelerate fundamental biological discovery across numerous labs. Which is not to say that curing genetic disease is a small thing. It’s a sea-change for our relationship with our own genomes and something I’m personally very passionate about. But I think it’s undeniable that thousands of labs all using democratized gene editing to make new discoveries will snowball in a big way.

