Is Cas9 specific?

Jacob Corn

I started writing a lengthy analysis about whether or not Cas9 is specific. It contained several in-depth analyses of many papers. There were arguments for and against. But right in the middle I realized that the details of the literature surrounding specificity don't really matter and I deleted the whole thing. Here's why...

Cas9's specificity might be pretty interesting to you if you're creating cell lines for research use. After all, you don't want to be reporting a phenotype that actually stems from some off-target knockout. But if you're thinking about a gene correction therapy, specificity will keep you up at night in a cold sweat.

And that distinction, together with the mindset of research vs therapeutics, is the key. Several papers have shown that Cas9 is both moderately specific and moderately permissive. There's all kinds of literature about seed regions and sgRNA bubbles, and so on. But right now, we can say that sgRNAs can be at least pretty good, and it's hard to tell when they'll be bad.

So for research purposes, it doesn't matter whether or not Cas9 is highly specific, because you should just choose two distinct guides and demonstrate that your phenotype is robust to choice of guides. The chance that two guides of different sequence will have the same off-target is very low. This is much like what's done with genome-wide CRISPRcut/i/a libraries, but I think it should also extend to CRISPR-made cell lines and probably even animal models.

And for therapeutic purposes, it doesn't really matter what the literature says. Even if all papers everywhere said that Cas9 was absolutely stringent, you'd still need to demonstrate specificity (or at least knowledge of benign off-targets) for your application of interest. Anyone who would use a genome-targeting reagent in humans without careful homework on that reagent (regardless of literature precedent) has no business making therapeutics.



Living protocols for genome editing

Jacob Corn

The field of genome editing is moving at breakneck speed and protocols are rapidly evolving. We've already made a few different posts on tips, tricks, and protocols for genome editing and regulation. But effectively sharing protocols and making sure that they're up to date is a daunting task. Much better to have a community-driven effort, where a starter protocol can be tweaked and updated as new developments come along.

That's why I'm happy to share that we've recently started putting our methods in an open repository for protocols that has the great feature of "forking". This means you can start from a protocol that you like, tweak it as desired, make a record of the tweaks, and re-publish your changes. Everything is also linkable to a DOI, which means you can potentially reference online protocols from within papers.

IGI protocols for T7E1 assays, in vitro transcription of guide RNAs, Cas9 RNP nucleofection, and more are available there.

Here's an explanatory video from the team.



Scoring CRISPR libraries, part II

Jacob Corn

Following up on my previous post about genome-wide CRISPR libraries, I thought it would be useful to show a bit more.

There are many things to consider when doing library work, but two major ones are 

  1. How sure are you that a hit stems from on-target activity vs off-target trickery?
  2. What fraction of the library is functional?

On-vs-off target is the real worry, since you could spend a great deal of time chasing down spurious hits. CRISPR (and sh/siRNA) libraries tackle this problem with redundancy, and one should always require that a phenotype enrich multiple guides corresponding to the same gene. But in libraries with relatively low redundancy (e.g. GeCKOv1 only has 3-4 guides per gene), it's easy to become enamored by a hit with a red-hot phenotype but only one guide. 

The concern about functional fraction of the library is more technical, but impacts both ease of the screen and the redundancy point from above. If many of your guides are non-functional, all that extra work to clone and transduce your massive library vs a smaller one is wasted effort. Worse, your chance at redundancy is diminished with each non-functional guide.

With that in mind, here are updated distributions for existing genome-wide guide libraries targeting human cells. The "penalty" axis is in log scale, and the penalties are designed to be easily interpretable, highlighting the class of the problem. For penalties, the tens place represents the score of the guide itself, the 100s place represents the number of intergenic off-targets, and the 1000s place represents genic off-targets.

For example, anything with log(penalty) = 0-2 has no off-targets and could be OK, though guides have a much higher chance of being completely non-functional as one approaches 2. Guides with log(penalty) = 2-3 have intergenic off-targets, with each 100-spike an additional off-target hit (e.g. a penalty of 100 = one off-target, 200 = two off-targets, etc). Guides with log(penalty) = 3-4 contain Pol III terminator sequences and are probably never even transcribed. Guides with log(penalty) = 4+ have genic off-targets, with each 1000-spike an additional off-target that impinges on a gene (e.g. a penalty of 1000 = one off-target, 2000 = two off-targets). These penalties compound, so a score of 2,354 means two genic off-targets, three intergenic off-targets, and a guide penalty of 54.
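Read as digits, the compound penalty unpacks with simple integer arithmetic. Here's a minimal sketch of that decoding (the function name is mine, not from any published tool):

```python
def decode_penalty(penalty):
    """Split a compound penalty into its three components."""
    genic = penalty // 1000               # 1000s place: off-targets hitting a gene
    intergenic = (penalty % 1000) // 100  # 100s place: intergenic off-targets
    guide = penalty % 100                 # last two digits: the guide's own score
    return genic, intergenic, guide

decode_penalty(2354)  # → (2, 3, 54): two genic off-targets, three intergenic, guide penalty 54
```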

Note that calling genic vs intergenic is done using Ensembl data and is sensitive to the type of CRISPR experiment. CRISPRi looks for hits within -50 to +300 of a gene, while CRISPRcutting looks at exons (for the moment we'll leave aside the scary prospect of cutting within potentially functional intronic or UTR regions).

In general, things are looking pretty good for CRISPRi. There's a bit of an advantage here, since CRISPRi only seems to work in a narrow window around the transcription start site, and so off-targets are less likely to hit a gene. CRISPRcutting libraries are not doing all that well with off-targets in annotated exons, and only deeper per-guide analysis would tell whether guide redundancy takes care of mis-called phenotypes. It's nice to see that GeCKO has improved with v2 (e.g. got rid of terminator sequences), and hopefully v3 can get some of the genic off-targets under control.

I want to stress that all of these libraries work just fine and have been used successfully to give biological insight. But keep these guide properties in mind when working with each library and thinking about hits arising from their use.

[Figures: CRISPRi library scores; Wang CRISPRcut library scores; GeCKO CRISPRcut library scores]





Coding for the wet lab

Jacob Corn

Large datasets are becoming the norm in modern biology, but I still see wet-lab students and postdocs who are stymied by anything that can't be accomplished in Excel. Should every grad student learn how to write code? Probably not. But those who know even the basics of programming will find that their lives are oh so much easier. Not only will they be able to automate analyses to turn a daunting (or impossible) task into an easy afternoon's work, they'll also be able to "speak the language", which will give them a leg up in describing problems and brainstorming solutions with dedicated computational biologists.

There are some nice pieces about this, but let me provide a few examples that should be more concrete to those involved in genome editing.

  • Let's say you're doing a CRISPRi screen. You've done some sequencing, removed constant regions, and now you have a file containing a huge number of protospacers. These should all map to guides and associated genes, which are listed in another file with the format <gene> <guide sequence>. You can write a very simple program to map guide sequences to genes and count the number of times each one appears.
  • Or maybe you've just edited a gene and you're interested in the distribution of indels around the cut site. It's not difficult to record the frequency of each insertion and deletion from the millions of reads in a next gen sequencing experiment.
  • Even more simple: you want to target a whole bunch of genes in a pathway, so you plan to make 50 guides targeting 25 distinct genomic regions. You can easily automate the design of all the primers needed for the T7E1 assays.

Of course, you must also be aware of the necessity for good statistics. The ability to automate tasks is no replacement for a good computational biologist or biostatistician, who can analyze whether measurements are actually significant. This is a good example of knowing enough to get you into trouble; be aware of when to seek out an expert! But a few hours' work on a program can save days of trying to process data with inappropriate tools (e.g. Excel).

Personally, I think python is the way to go for most simple programming tasks. It's reasonably fast, pretty easy to learn, deals well with text, has some good biology-oriented modules (e.g. Biopython), and has a growing user base (making it easy to find help). Python certainly isn't the fastest, but the focus here is on writing quick one-offs to get a job done rather than making an ultra-fast production tool. General programming knowledge is probably sufficient for most tasks, and there are plenty of free tutorials to get started.

To illustrate how simple a program can be, here's a script to accomplish the CRISPRi (or any guide library) guide-gene mapping/counting tasks described above. It's obsessively documented, to try to show how this can be quite straightforward. It is also written for ease of understanding by a beginner rather than speed or conciseness. Writing this took about 20 minutes.
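A minimal sketch of that mapping/counting task, assuming the "<gene> <guide sequence>" library format described above (this is an illustrative reimplementation, not the original script; function names are mine):

```python
from collections import Counter

def load_library(lines):
    """Build a guide-sequence -> gene lookup from "<gene> <guide>" lines."""
    guide_to_gene = {}
    for line in lines:
        gene, guide = line.split()
        guide_to_gene[guide] = gene
    return guide_to_gene

def count_guides(reads, guide_to_gene):
    """Count exact protospacer matches, keyed by (gene, guide sequence)."""
    counts = Counter()
    for read in reads:
        guide = read.strip()
        gene = guide_to_gene.get(guide)
        if gene is not None:  # reads matching no library guide are dropped
            counts[(gene, guide)] += 1
    return counts
```

Given a two-guide library and a handful of reads, `count_guides` returns per-guide tallies; the `most_common()` method of the returned Counter then gives a ranked count table.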




More on the soon-to-be plateau of CRISPR/Cas9

Jacob Corn

This week's post is short, since I'll be traveling most of next week and needed the weekend to catch up on some data. Following up on last week's post about the explosion of CRISPR/Cas9, I thought I'd use another RNAi parallel to illustrate what I mean by Cas9 genome engineering becoming commonplace.

Instead of searching anywhere in a paper, here's what a by-year search for "RNA interference" in just the title of a paper looks like: big spike, then gradual slow decline as the technology becomes commonplace. It's not that people started using RNAi any less often. In fact, I'm sure its use is only growing. But you don't put a common technique in the title (even if it's a linchpin) unless there's very good reason to do so.


Breaking RNA interference titles down by month looks like this.


And here's where we are for "Cas9" in the title of a paper, by month.


Notice the similarity between the recent number of Cas9-title publications and when RNA interference started showing up in titles less often.  One of my goals at the IGI is to hasten the day when genome editing is something everyone just does.


The explosion of CRISPR/Cas9

Jacob Corn

It seems like every time you turn around, there's another paper that mentions CRISPR/Cas9. Just how fast has the field exploded? There's nothing like data to find out! In the histogram below, each bar on the X-axis is a week (arbitrarily starting in late 2011) and the Y-axis is the number of papers in a Pubmed search for "CRISPR AND Cas9" (side note: one needs to include CRISPR in the search because "Cas9" spuriously includes results from a few labs that like to call caspases "Cas", e.g. caspase 9 = Cas9, caspase 3 = Cas3).


Your hunch was correct -- there IS a new paper mentioning CRISPR/Cas9 every time you turn around: several per day, in fact.

And I think (hope) that means we're reaching a tipping point. It's currently in vogue to sprinkle "Cas9" throughout a paper, even if the system is used as a tool for biology and not itself the focus of the work, because it gets the attention of editors. But as more and more groups use Cas9, gene editing will become commonplace. Routine, even. And that's exactly what should happen! Can you imagine a paper these days that crows about how they used gel electrophoresis to separate proteins, or PCR and restriction enzymes to clone a gene? It's the natural course of things for disruptive technologies to quickly become just another part of the toolbox. Take RNAi, for example, which completely upended biology not too long ago and is now used routinely and without fuss by most cell biology labs. Compare the plot above with the equivalent for RNAi (the search here was for "RNA interference" due to complications in searching "RNAi", since that also finds other hits on a viral RNA called "RNA-one". Also note that there are some false positives before Andy Fire's 1998 Nature paper).


I'm looking forward to the day that CRISPR/Cas9 becomes as commonplace as RNAi, since it will mean we've arrived at another new era of biology. Want to know what that conserved genetic element does? Just remove it. Want to find out what that conserved residue does in your favorite protein? Mutate it in your organism of interest. No big deal!

That's going to be an incredible time. I'm in this for the destination, not the vehicle.

For those interested, here's how to generate the histogram
(download medline format records for a pubmed search)

#!/usr/bin/env python
# Print each record's date (as an ordinal day) from a Medline-format file.
from Bio import Medline  # requires Biopython
import datetime
import sys

fin = sys.argv[1]
with open(fin) as p:
    records = Medline.parse(p)
    for record in records:
        d = record['DA']  # the 'DA' (date) field is formatted YYYYMMDD
        d =[:4]), int(d[4:6]), int(d[6:8]))
        print(d.toordinal())

(on the command line)
$ [medline format file you downloaded] > dates.txt
(in R)
# plotdates.R
d<-read.table("dates.txt", header=F)
ranges<-append(c(seq(min(d$V1), max(d$V1), by = 7)), max(d$V1))
hist(d$V1, breaks=ranges, freq=TRUE, col="blue")


What comes after Precision Medicine?

Jacob Corn

Lately I've been thinking quite a bit about the definition of Precision Medicine. This was brought about by the CIAPM call for proposals and two associated workshops. The workshops have been stellar, with a feeling more of collaboration than competition. Over the two days, I got to thinking about what Precision Medicine seems to mean right now versus what it might mean in the future.

Wikipedia has an interesting contrast between Precision Medicine and Personalized Medicine. But both PMs are defined as not necessarily implying treatments that are customized for an individual (or even a subset of patients). Instead, they are focused on using large data sets (genomics, proteomics, other omics, health records, etc etc) to determine how some existing medicine should or should not be delivered to a patient.

Let's say you have a new drug that targets a particularly nasty form of cancer. PMs are currently focused on deciding how you're going to administer that drug - who will get it and who won't. That's very important, all the way from clinical trials into general use. Trials may read out falsely negative if people who have no hope of benefit are included in the trial, and since every drug has side effects it's best to pair a treatment with those for whom that risk:reward is favorable. But PMs are not about doing diagnostics on an individual patient and then custom-designing a new therapy for that patient.

This has all been weighing on my mind because of the two worlds in which I've worked. While I was in biopharma, we talked about Precision Medicine much like the above paragraph. "Precision" meant tens of thousands of people included, but excluding millions. Keith Yamamoto and Atul Butte (link probably obsolete soon, as Atul is now at UCSF) phrase this very nicely in terms of advancing human health by having the courage to tell people "No."

Now I work in a field in which people in my lab routinely design reagents capable of specifically targeting one gene and changing a single base. If even one person has a mutation that causes a disease, we could theoretically make a reagent to change that mutation within a week or so (in the lab!). We're not doing that in the clinic, but I think the writing is on the wall, and things might be different in a decade or so.

Widespread treatment-for-1 would be a major challenge, since it differs from normal medicine (and even Precision Medicine) in so many ways. How do we pay for such a thing? Will it be covered by insurance? What's the incentive for a drug company to make a therapy for only one customer? Or is the technology itself the product and the therapy just one small example? How should we regulate it? How do we know if an intervention is working? How do we know if it's safe? The N-of-1 trial might be the only thing possible, because there may be only one person with this particular mutation. 

As sequencing gets cheaper and faster, we'll quickly accumulate massive piles of data. That's currently a very centralized model, in which reams of data flow in to big centers and high-level rules for the application of treatment flow outwards. Turning the clock further forward, a decentralized future is also possible, in which hospital bedside sequencing informs programmable therapies that can be created in the very same hospital. Near-future Precision Medicine will save many lives, but a one-off-treatment system could fill in the corners to spell the end of orphan diseases. Science fiction at the moment, and much needs to happen to bring it about (not just biology, but engineering, regulation, and so on), but at least now we can see a path through the woods.


Our focus on the future present

Jacob Corn

It's been a rather wild ride in the last month, which hasn't left much time for blog posts. But I'm planning to turn over a new leaf and start posting at least something short at the beginning of every week.

This week's post addresses a question that I've been asked in many ways by many people: what about germline editing? After the IGI started the ball rolling with a small meeting in Napa, we penned a call for a temporary moratorium on germline editing and have been lobbying for a larger summit, which is now slated for October. I think it likely that restriction or proscription of germline editing will be the outcome.

At this time, the IGI Lab will not do research on human germline editing for several reasons, including:

1. The IGI Lab is focusing on diseases for which somatic (non-heritable) editing would be a transformative advance. The media loves to talk about designer babies, but we actually don't know the first thing about the genetic basis behind complex traits like beauty or intelligence. But we do know a lot about genetic disease, particularly so-called monogenic disorders, in which a problem in a single gene causes the disease. Online Mendelian Inheritance in Man currently contains about 3,500 disorders that have a clinical phenotype for which the molecular basis is known. It's clear that we should start with one of these, such as sickle cell disease, cystic fibrosis, muscular dystrophy, or Huntington's disease. The thing is, curing most genetic diseases wouldn't require germline editing. Almost any hematopoietic disease could be cured non-heritably by taking a patient's bone marrow, performing gene correction, and then re-implanting the edited bone marrow. By now we're very good at bone marrow transplants. And once delivery systems are ironed out, even non-hematopoietic diseases could be cured in adults with gene correction therapy. But eventually achieving the above will take a lot of work. At the IGI Lab, we're focusing on that future transformation of genetic disease from something we treat with palliative care to something we cure.

2. Cas9 technology is currently too nascent for germline editing to be wise. Gene correction is still a relatively new field, with few clinical successes (or even attempts) to refer to. And compared to other gene editing technologies, such as ZFNs or TALENs, Cas9 is the new kid on the block. There are just so many questions still outstanding about the technology, as evidenced by the huge surge of papers from all over the world that do nothing but figure out new things about Cas9: How does it find targets? What do off-target sequences even look like? What happens between cutting and the appearance of edits? At the IGI we spend a lot of time using Cas9 to do gene editing in somatic cells, and we've gotten very good at it (more on that when the papers come out). But sometimes we get surprised by the outcomes. That makes me nervous enough for somatic editing, and we obsessively characterize individual reagents for our clinical projects. But the Rumsfeldian Known Unknowns and Unknown Unknowns are too great in relation to a heritable change in someone's genome. When moving to the clinic, one should prefer a boring tech over one that's exciting and new but poorly understood, and if no boring tech exists then keep working. In the balance of impact vs risk, a person's life rests in one pan.




Three timescales of impact for next-gen genome editing

Jacob Corn

This post expands on a slide that I often present in seminars: what is the scale (in time and impact) of next generation genome editing? I'm not restricting this to CRISPR/Cas9, because the field is moving so fast that it's anyone's guess whether we'll soon see a next next-gen (Cas10?). But the accelerator has been pressed firmly to the floor, and there's no going back. To avoid overuse of speculative words like "might" and "could", I'll just speak as if I have a crystal ball. But futurism is often a fallacy, and the genome editing field is only two years old and moving very quickly, so consider what's below a sketch at best and random guessing at worst.

Edit: Here I'm focusing on just a few areas out of many. There are very exciting things on the horizon for editing of crops and livestock, synthetic biology in normally difficult systems, and much more. I'm leaving all of that aside for now as fodder for another post.

Short: In the next few years I think we'll see greater adoption of genome editing in many labs, both academic and industrial. This will mostly be what I call "RNAi v2.0" -- disruption of genes in a very fast and easy mode (either via CRISPRcutting or CRISPRinhibition). This will extend to both human cells and model organisms, and the scope accessible for reverse genetics will be greatly expanded. Now that more and more genomes are sequenced, we'll finally have a way to figure out what biologies underlie all of those great annotations in those organisms (reverse genetics) or screen for which genes are responsible for incredible phenotypes (forward genetics). How do salamanders regenerate limbs? How do some fungi turn insects into zombies? What are the roles of genes expressed during Plasmodium infection? Does ablating gene X slow tumor progression in this model system? Are all of these genes really necessary for epithelial differentiation in the gut? These kinds of questions will be broadly answerable in both academic and industrial research settings: fundamental discoveries that will accelerate and broaden our understanding of the world around us.

Medium: Within five years true gene editing (surgically replacing one sequence with a defined replacement) will have matured and be as easy in human cells and model organisms as plasmid mutation currently is in bacteria. We're already starting to see some hints of this on the horizon, so maybe this should even be in the "short" bin. But I think a lot of current work is focused on very low hanging fruit (important though it is), and there's still no clear path towards quickly and robustly engineering silent or deleterious variants, for example mutants with a fitness disadvantage. So this one goes into "medium term". Surgical introduction of mutation would be huge for any number of basic biologies, since it would enable one to readily ask reductionist and mechanistic questions in the context of a living cell or organism without confounding factors. On the translational front, in the medium term gene editing will totally change the way preclinical research is carried out. Custom-designed safety models (e.g. humanized rats), highly engineered cell lines to meld target and phenotypic screening, synthetic biology for enhanced drug production, and so on. People have been wanting to do these things for a long while, and they might take a little longer to achieve in industry only because the focus will include robustness of the systems rather than purely speed, but they're coming. More relevant to the general public, in the medium term we'll start to see the widespread clinical emergence of ex vivo therapies that take advantage of gene editing, especially in the hematopoietic system. Clinical research and trials are already ongoing here (e.g. Sangamo's work with ZFN knockout of CCR5 for HIV), but now I'm talking about FDA approval and widespread use of an edited product as a therapeutic. The trial data has so far been very impressive on many fronts, but time will tell, and the finish line is always further away than you think.

Long:  Since the likelihood of anyone accurately predicting at this timescale is quite low, rather than make any specific predictions I'll instead wax philosophic. Here we're starting to talk about disruptive science fiction entering our lives in a real way. Things like in vivo editing in adult or postmitotic tissues. Sci-fi may actually be an apt comparison and offers a few positive examples of successful prognostication: Edward Bellamy predicted credit cards in 1888 and Arthur C. Clarke described communications satellites in 1945. And in a way, media of all kinds has been preparing us for genome editing for decades. I was recently asked how I explain what genome editing is and why it's practically beneficial. But the thing is, I actually don't need to do much explaining. I've talked about genome editing with taxi drivers, hair dressers, graphic designers, high school students, and Hollywood actresses. Everyone gets it right away. You don't need to know a thing about Cas9 or mechanisms of DNA break repair to understand genome editing. Most people very quickly understand what genome editing is and they see how much good it could do. But everyone also sees how much harm might come if we're reckless and how much care should be taken. So in the long term, our relationship with genetic diseases will fundamentally change. I'm not necessarily talking about germline editing, since one might have the same outcome with the ability to replace affected tissues with edited tissues. There is the opportunity for real and permanent cures for terrible diseases in which people currently just make do. That's powerful stuff. But it's a long road, and there's a lot left to be done. 


Pause and reflect before acting on the human germ line

Jacob Corn

There's been quite a lot of buzz around our recent Perspective piece in Science (Baltimore et al. with alphabetical author list, open access for now), stemming from an IGI-organized bioethics workshop in Napa. Ed Lanphier et al were clearly thinking along the same lines, and wrote a similar article for Nature. The crux of the matter stems from CRISPR/Cas9's ease of use. Germ line genome engineering has suddenly become surprisingly easy in a variety of organisms, and the same may be true for the human germ line (there are rumors that some have already tried).

This is a very important time for science, and much rests on clear communication and open discourse. Since germ line edits would be heritable, we are literally talking about the ability to change human evolution faster than natural selection. Many have drawn parallels to the 1975 Asilomar conference on recombinant DNA technologies, ourselves included (several attendees of the original Asilomar meeting were at the Napa workshop). Some worry about the futility of trying to put the genome engineering genie back in the bottle.

To be clear, our position is not a call to outright ban engineering of the germ line. Instead, we ask for a halt to experiments along these lines until a much larger meeting whose attendees represent a broad cross-section of scientific, clinical, ethical, and regulatory expertise. Whether or not individual researchers have performed human germ line editing, we must stop and ask ourselves hard questions before embarking on this path in earnest. Is it acceptable to cure genetic disease? What about the introduction of naturally occurring advantageous alleles (e.g. PCSK9 mutation)? If we proceed, what safety standards should be put in place? It would be wise to hash things out before acting, rather than repenting at leisure.

In addition to a larger meeting, broad communication about the science is absolutely critical. America is at a strange point: the majority of people believe that science is a good thing, but simultaneously disagree with scientists on several scientific issues. For the last few weeks I've been experiencing the edges of this phenomenon, when journalist after journalist asks me about designer babies. We must do an excellent job of providing high quality information to non-scientists about the genome engineering revolution in which we find ourselves. The goal is not to pedantically "educate" the public. I've found that everyone, from taxi drivers to accountants to personal trainers to librarians, quickly and easily grasps what human genome engineering is all about. Science fiction has been priming us for this moment for decades. The real question, which must be put to everyone, is how should we proceed now that it's real?



