
Improved knockout with Cas9

Jacob Corn


Cas9 is usually pretty good at gene knockout. Except when it isn’t. Most people who have gotten their feet wet with gene editing have had an experience like that in the following gel, in which some guides work very well but others are absolute dogs.

 
That’s a problem if you have targeting restrictions (e.g. when going after a functional domain instead of just making a randomly placed cut). So what can one do about it?
 

TL;DR Adding non-homologous single stranded DNA when using Cas9 RNP greatly boosts gene knockout.


 

The problem

There have been a few very nice papers showing that Cas9 prefers certain guides. I refer to this as the One True Guide hypothesis, the idea being that Cas9 has somehow evolved to like some protospacers and dislike others. The data doesn't lie, and there is indeed truth to this - Cas9 likes a G near the PAM and hates a C. But guides that are highly active in one cell line are poor in others, and comparing guide preference experiments in mouse cells vs. worms gives very different answers. That's not what you'd expect if the problem lies solely in Cas9's ability to use a guide RNA to make a cut.
 
But of course, Cas9 is only making cuts. Everything else comes down to DNA repair by the host cell.
 

Our solution

In a new paper from my lab, just out in Nature Communications, we found that using a simple trick to mess with DNA repair can rescue totally inactive guides and make it easy to isolate knockout clones, even in challenging (e.g. polyploid) contexts. We call this approach “NOE”, for Non-homologous Oligonucleotide Enhancement.
(The acronym is actually a bit of a private joke for me, since I used to work with NOEs in a very different context, and Noe Valley is a nice little neighborhood in San Francisco.)
 
How does one perform NOE? It’s actually super simple. When using Cas9 RNPs for editing, just add non-homologous single stranded DNA to your electroporation reaction. That’s it. This increases indel frequencies several fold in a wide variety of cell lines and makes it easy to find homozygous knockouts even when using guides that normally perform poorly.
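If it helps to make that concrete, here is a quick back-of-the-envelope sketch (in Python) for converting an ssDNA amount from pmol to ng before setting up the electroporation. Every number in it (RNP amount, ssDNA amount, oligo length) is a placeholder assumption, not the quantities from our protocol; the real details are in the Nature Communications methods.

```python
# Hypothetical NOE mix calculator. Every quantity below is a placeholder
# assumption, not the protocol from the paper.
CAS9_RNP_PMOL = 100       # Cas9 RNP per electroporation (assumed)
NOE_SSDNA_PMOL = 100      # non-homologous ssDNA, assumed roughly 1:1 with RNP
SSDNA_LENGTH_NT = 120     # e.g. a stretch of BFP coding sequence (assumed length)

# Average mass of an ssDNA nucleotide is ~330 g/mol, so ng per pmol = nt * 330 / 1000
ng_per_pmol = SSDNA_LENGTH_NT * 330 / 1000
print(f"Add ~{NOE_SSDNA_PMOL * ng_per_pmol:.0f} ng of ssDNA "
      f"alongside {CAS9_RNP_PMOL} pmol of Cas9 RNP per electroporation")
```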
 
The key to NOE is having extra DNA ends. Single stranded DNA works best, and even the homologous ssDNAs one might use for HDR will work. We tend to use ssDNAs that are not homologous to the human genome (e.g. a bit of sequence from BFP) because they make editing outcomes much simpler (NHEJ only instead of NHEJ + HDR). But double stranded DNAs also work, and even sheared salmon sperm DNA does the trick! Plasmids are no good, since they have no free ends.
 
We know that NOE is doing something to DNA repair because, while it works in many cell lines, the molecular outcomes differ between cells! In most lines (5/7 that we've tested), NOE causes the appearance of very large deletions (much larger than you would normally see when using Cas9). But in the other 2/7, NOE instead causes the cells to scavenge little pieces of double stranded DNA and drop them into the Cas9 break! The junctions of these captured pieces look like microhomologies, but we haven't yet done the genetic experiments to say whether a process such as microhomology-mediated end joining is responsible.
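For readers sifting through their own amplicon sequencing, a toy classifier along these lines is enough to separate the outcome classes just described. The allele format and the size cutoffs are invented for illustration; this is not the analysis pipeline from the paper.

```python
# Toy outcome classifier. The allele format and size cutoffs are illustrative
# assumptions, not the analysis used in the paper.
def classify_allele(deletion_len: int, inserted_seq: str) -> str:
    """Bucket a single edited allele into the outcome classes described above."""
    if deletion_len > 200:            # "very large deletion" (arbitrary cutoff)
        return "large deletion"
    if len(inserted_seq) >= 20:       # captured piece of dsDNA at the break
        return "DNA capture at the break"
    if deletion_len > 0 or inserted_seq:
        return "small indel"
    return "unedited or perfectly repaired"

# A few hypothetical alleles: (deletion length, inserted sequence)
for deletion, insertion in [(350, ""), (2, ""), (0, "ACGTACGTACGTACGTACGTACGT"), (0, "")]:
    print(classify_allele(deletion, insertion))
```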
 

What's going on here?

How can altering DNA repair so drastically change the apparent efficacy of a given guide? We think that our data, together with data from other labs, imply that Cas9 cuts are frequently repaired perfectly. This sets up a futile cycle, in which Cas9 re-binds and re-cuts the same site. We only observe editing when the cycle is exited through imperfect repair, resulting in an indel. Perfect repair makes a lot of sense for normal DNA processing, since we accumulate DNA damage all the time in our normal lives; we'd be in a sorry state indeed if that damage frequently resulted in indels. It seems that NOE either inhibits perfect repair (e.g. by titrating out Ku?) or enhances imperfect repair (e.g. by stimulating an ATM response?), though we still lack direct data on the mechanism.
[Figure: the Cas9 cut, perfect repair, and re-cut cycle]
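To make the logic of that cycle concrete, here is a small toy simulation of the model. The cut and repair probabilities are invented for illustration (this is not a fit to our data), but it shows how tilting repair toward imperfect outcomes, which is roughly what we imagine NOE does, sharply raises the fraction of alleles that exit the cycle with an indel.

```python
import random

def indel_fraction(alleles=10_000, cycles=10, p_cut=0.9, p_imperfect=0.02):
    """Toy model of the futile cycle: each round an intact allele may be cut,
    then is either repaired perfectly (stays in the cycle) or picks up an
    indel (exits the cycle). All probabilities are invented for illustration."""
    edited = 0
    for _ in range(alleles):
        for _ in range(cycles):
            if random.random() < p_cut and random.random() < p_imperfect:
                edited += 1
                break
    return edited / alleles

print(f"mostly perfect repair:      {indel_fraction(p_imperfect=0.02):.2f} of alleles edited")
print(f"repair tilted toward error: {indel_fraction(p_imperfect=0.10):.2f} of alleles edited")
```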
 

What is it good for?

The ability to stimulate incorporation of double stranded DNA into a break might be useful, since non-homologous or microhomology-mediated integration of double stranded cassettes has recently been used for gene tagging. But we haven’t explicitly tried this. We have also found NOE to be very useful for arrayed screening, in which efficiency of the edit is key to phenotypic penetrance and subsequent hit calling.
 
Importantly, NOE seems to work in primary cells, including hematopoietic stem cells and T cells. We've been using it when doing pooled edits in unculturable primary human cells, and find that far higher fractions of cells have gene disruptions when using NOE. We've so far only worked in human cells with RNP, and I'm very interested to hear people's experiences using NOE in other organisms. We haven't had much luck trying it with plasmid-based expression of Cas9, but other groups have told me that they can get it to work in that context as well.
 

How do I try it?

So if you’re interested, give it a shot. The details are all in our recent Nature Communications paper, but feel free to reach out if you have any more questions. This work was done by Chris Richardson (the postdoc who brought you flap-annealing HDR donors), Jordan Ray (an outstanding undergrad who is now a grad student at MIT), and Nick Bray (a postdoc bioinformatics guru).


CRISPR Challenges – Imaging

Jacob Corn


This post is the first in a new, ongoing series: what are the big challenges for CRISPR-based technologies, what progress have we made so far, and what might we look forward to in the near future? I’ll keep posting in this series on an irregular basis, so stay tuned for your favorite topic. These posts aren't meant to belittle any of the amazing advances made so far in these various sub-fields, but to look ahead to all the good things on the horizon. I’m certain these issues are front and center in the minds of people working in these fields, and this series of posts aims to bring casual readers up to speed on what’s going to be hot.

First up is CRISPR imaging, in which Cas proteins are used to visualize some cellular component in either fixed or live cells. This is a hugely exciting area. 3C/4C/Hi-C/XYZ-C technologies give great insight into the proximity of two loci averaged over large numbers of cells at a given time point. But what happens in each individual cell? Or in real time? We already know that location matters, but we’re just scratching the surface on what, when, how, or why.

CRISPR imaging got started when Stanley Qi and Bo Huang fused GFP to catalytically inactive Cas9 (dCas9) to look at telomeres in living cells. Since then, we’ve seen similar approaches (fluorescent proteins or dyes brought to a region through Cas9) and a lot of creativity used to multiplex up to three colors. There’s a lot more out there, but I want to focus on the future...

What's the major challenge for live cell CRISPR imaging in the near future?

Sensitivity

Most CRISPR imaging techniques have trouble with signal to noise. It is so far not possible to see a fluorescent Cas9 binding a single copy locus when there are so many Cas9 molecules floating around the nucleus.  So far imaging has side-stepped signal to noise by either targeting repeat sequences (putting multiple fluorescent Cas9s in one spot) or recruiting multiple fluorophores to one Cas9. Even then, most CRISPR imaging systems rely on leaky expression from uninduced inducible promoters to keep Cas9 copy number on par with even repetitive loci.  Single molecule imaging of Halo-Cas9 has been done in live cells, but again only at repeats. Even fixed cell imaging has trouble with non-repetitive loci. Sensitivity is also a problem for RCas9 imaging - this innovation allowed researchers to use Cas9 directed to specific RNAs to follow transcripts in living cells. But it was mostly explored with highly expressed (e.g. GAPDH) or highly concentrated (e.g. stress granule) RNAs. How can we track a single copy locus, or ideally multiple loci simultaneously, to see how nuclear organization changes over time?
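A rough back-of-the-envelope calculation shows why single-copy loci are so hard: the few unbound fluorescent Cas9 molecules passing through a diffraction-limited spot can easily rival the one bound molecule you are trying to see. The numbers below (nuclear size, free Cas9 copy number, detection volume) are illustrative assumptions only.

```python
from math import pi

# All numbers are illustrative assumptions, not measurements.
NUCLEUS_DIAMETER_UM = 10.0       # typical mammalian nucleus
FOCAL_VOLUME_UM3 = 0.2           # rough diffraction-limited detection volume
FREE_CAS9_PER_NUCLEUS = 10_000   # unbound fluorescent Cas9 copies (assumed)

nuclear_volume = pi / 6 * NUCLEUS_DIAMETER_UM ** 3        # sphere, ~524 um^3
background = FREE_CAS9_PER_NUCLEUS * FOCAL_VOLUME_UM3 / nuclear_volume

for bound_copies in (1, 10, 100):   # single-copy locus vs. repeat arrays
    print(f"{bound_copies:>3} bound Cas9 vs ~{background:.1f} free in focus "
          f"-> signal/background ~ {bound_copies / background:.1f}")
```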

Someone’s going to crack the sensitivity problem, allowing people to watch genomic loci in living cells in real time. Will we learn how intergenic variants alter nuclear organization to induce disease? Will we see noncoding RNAs interacting with target mRNAs during development? With applications this big, I know many people are working on the problem and I’m sure there will be some big developments soon.


Ideas for better pre-prints

Benjamin Gowen


A few weeks ago, Jacob wrote a blog post about his recent experience with posting pre-prints to bioRxiv. His verdict? “…preprints are still an experiment rather than a resounding success.” That sounds about right to me. I’m bearish on pre-prints right now because the very word implies that the “real” product will be the one that eventually appears “in print”. Don’t get me wrong--I think posting pre-prints is a great step toward more openness in biology, and I applaud the people who post their research to pre-print servers. Pre-prints are also a nice work-around to the increasingly long time between a manuscript’s submission and its final acceptance in traditional journals; posting a pre-print allows important results to be shared more quickly. There’s a lot of room for improvement, though. With some changes, I think pre-print servers could better encourage a real conversation between a manuscript’s authors and readers. Here are some of my thoughts on how they might achieve that. I know there are several flavors of pre-print servers out there, but for this post I’m going to use bioRxiv for my examples.

 

Improve readability

It’s 2016, we’ve got undergraduates doing gene editing, but most scientific publications are still optimized for reading on an 8.5x11” piece of paper. Pre-prints tend to be even less readable--figures at the end of the document, with legends on a separate page. The format discourages casual browsing of pre-prints, and it ensures the pre-print will be ignored as soon as a nicely typeset version is available elsewhere. I will buy a nice dinner for anyone who can make pre-prints display like a published article viewed with eLife Lens.

 

Better editability

bioRxiv allows revised articles to be posted prior to publication in a journal, but I would like a format that makes it really easy for authors to improve their articles. Wikipedia is a great model for how this could work. On Wikipedia, the talk page allows readers and authors to discuss ways to improve an article. The history of edits to a page shows how an article evolves over time and can give authors credit for addressing issues raised by their peers. Maintaining good version history prevents authors from posting shoddy work, fixing it later, and claiming priority based on when the original, incomplete version of the article was posted.

 

Crowd-source peer review

Anyone filling in a reCAPTCHA to prove they’re not a robot could be helping improve Google Maps or digitize a book. What if pre-print servers asked users questions aimed at improving an article? Is this figure well-labeled? Does this experiment have all of the necessary controls? What statistical test is appropriate for this experiment? With data from many readers about very specific pieces of an article, authors could see a list of what their audience wants. It looks like we need to repeat the experiments in Figure 2 with additional controls. Everybody likes the experiments in Figure 3, but they hate the way the data are presented.

 

Become the version of record

Okay, this one’s definitely a stretch goal. Right now pre-prints get superseded by the “print” version of the article, but that doesn’t need to be the case. Let’s imagine a rosy future in which articles on bioRxiv are kept completely up-to-date. Articles are typeset through Lens, making them more readable than a journal’s PDF. There’s a thriving “talk” page where readers can post comments or criticisms. Maybe the authors do a new experiment to address readers’ comments, and it’s far easier to update the bioRxiv article than to change the journal version. At that point, bioRxiv would become the best place to browse the latest research or make a deep dive into the literature. Traditional journals could still post their own versions of articles, provided they properly cite the original work, of course.


Why preprints in biology?

Jacob Corn


I'm going to take a step away from CRISPR for a moment and instead discuss preprints in biology. Physicists, mathematicians, and astronomers have been posting manuscripts online before peer-reviewed publication for quite a while on arxiv.org. Biologists have recently gotten in on the act with CSHL's biorxiv.org, but there are others such as PeerJ. At first the main posters were computational biologists, but a recent check shows manuscripts in evo-devo, gene editing, and stem cell biology. The preprint crowd has been quite active lately, with a meeting at HHMI and a l33t-speak hashtag #pr33ps on twitter.

I recently experimented with preprints by posting two of my lab's papers on biorxiv: non-homologous oligos subvert DNA repair to increase knockout events in challenging contexts, and using the Cas9 RNP for cheap and rapid sequence replacement in human hematopoietic stem cells. Why did I do this, and how did it go?

Opinions have been divided on whether preprints are a good thing. Do they establish fair precedence for a piece of work and get valuable information into the community faster than slow-as-molasses peer review? Or do they confuse the literature and encourage speed over solid science?

In thinking about this, I've tried to divorce the issue of preprints from that of for-profit scientific publication. I found that doing so clarified the issue a lot in my mind.

Why try posting a preprint? Because it represents the way I want science to look. While I was a group leader in industry, I was comfortable with relative secrecy. We published a lot, but there were also things that my group did not immediately share because our focus was on making therapies for patients. But in academia, sharing and advancing human knowledge are fundamental to the whole endeavor. Secrecy, precedence, and so on are just career-oriented externalities bolted onto basic science. I posted to biorxiv because I hoped that lots of people would read the work, comment on it, and we could have an interesting discussion. In some ways, I was hoping that the experience would mirror what I enjoy most about scientific meetings - presenting unpublished data and then having long, stimulating conversations about it. Perhaps that's a good analogy - preprints could democratize the sharing of unpublished data at meetings, so that everyone in the world gets to participate and not just a few people in the know.

How well did it go? As of today the PDF of one paper has been downloaded about 230 times (I'm not counting abstract views), while the other was downloaded about 630 times. That's nice - hundreds of people read the manuscripts before they were even published! But only one preprint has garnered a comment, and that one was not particularly useful: "A++++, would read again." Even the twitter postings about each article were mostly 'bots or colleagues just pointing to the preprint. I appreciate the kind words and attention, but where is the stimulating discussion? I've presented the same unpublished work at several meetings, and each time it led to some great questions, after-talk conversations, and has sparked a few nice collaborations. All of this discussion at meetings has led to additional experiments that strengthened the work and improved the versions we submitted to journals. But so far biorxiv seems to mostly be a platform for consumption rather than a place for two-way information flow.

Where does that leave my thoughts on preprints? I still love the idea of preprints as a mechanism for open sharing of unpublished data. But how can we build a community that not only reads preprints but also talks about them? Will I post more preprints on biorxiv? Maybe I'll try again, but preprints are still an experiment rather than a resounding success.


PS - Most journals openly state that preprints do not conflict with eventual submission to a journal, but Cell Press has said that they consider preprints on a case-by-case basis. This has led to some avid preprinters declaring war against Cell Press' "draconian" policies, assuming that the journals are out to kill preprints for profit motives alone. By contrast, I spoke at some length with a senior Cell Press editor about preprints in biology and had an incredibly stimulating phone call - the editor had thought about the issues around preprinting in great depth, probably even more thoroughly than the avid preprinters. I eventually submitted one of the preprinted works to a Cell Press journal without issue. Though I eventually moved the manuscript to another journal, that decision had nothing to do with the work having been preprinted.


Cas9 mouse

Jacob Corn


I just had a great conversation with Dana Carroll and Mark DeWitt about ways one might improve homology-directed repair. But the proof will be in the pudding. I'm sure many (many, many, many) groups are working to crack this particular nut, but so far it's been recalcitrant. Cells just love to use NHEJ instead of HDR, which is great for making knockouts but not so good for true editing.

Lots of exciting advances out in the literature, including the Cas9 mouse and genome-wide screening with CRISPRi+CRISPRa. More on the screening at a later date, but how about that mouse?

The Cas9 mouse is a great proof-of-concept tool that brings mouse genetics to the people, and I think it will both accelerate research and decrease frustration. Cutting in the brain looks outstanding, and making knockouts in primary immune cells was a great idea. Editing in the lung looks a bit more iffy, and from Figure 1 it's not clear to me whether the problem is AAV delivery of sgRNA or Cas9 expression (highest in brain but relatively low in lung). Regardless, getting simultaneous knockout of two genes and editing of another is quite a feat! But the very mosaic nature of the mutations will probably restrict this to making changes that lead to hyperproliferation, such as loss-of-function tumor suppressors or gain-of-function oncogenes. Though I wouldn't bet the farm on results from knockouts in Cas9 mice alone, one potential utility is as a proof-of-concept to trigger a traditional transgenic animal. It's not uncommon to spend the huge time and effort involved in making a transgenic mouse only to find no phenotype. With the Cas9 mouse, one could test several candidates for phenotype and then make the clean transgenic (including backcrosses) in promising-looking cases. And with more efficient editing (coming in v2?), this might truly be a game-changer.


Watching Cas9 read a PAM

Jacob Corn


At the risk of becoming structure-centric (three crystal structures in four posts), I couldn't pass up commenting on the Jinek Lab's beautiful structure of SpyCas9 in complex with sgRNA and target DNA including an NGG PAM. One big take-home here is that the PAM is read out by two arginines (1333 and 1335) and a lysine (1107). The PAM itself is hybridized to the non-target DNA strand, but the DNA immediately downstream is flipped 180 degrees so that it can be read out by the RNA protospacer! This bit of structural juggling is made possible by a serine, which together with a backbone contact forms an interaction the authors term a "phosphate lock". 

Like any great science this paper raises as many questions as it answers. Mutating the PAM-contacting arginines to alanine abolishes DNA binding, but switching them to the identities found in organisms with non-NGG PAMs doesn't switch PAM specificity. So is it even possible to make SpyCas9 recognize other PAMs? You can bet that many groups are working on that very problem, and I'm sure this paper has given them some ideas.


A veritable downpour

Jacob Corn


Following up on last week's post on structures of the E. coli Cascade complex, I just realized that I missed Blake Wiedenheft's structure in Science. Blake determined the original Cascade cryoEM envelope (at 8Å no less!), so it's quite nice that his group also solved one of these beautiful crystal structures. It's funny how high-impact structures sometimes come in waves. There seem to be many examples in the literature of long-awaited structures suddenly being cracked by multiple groups simultaneously. Is it game-changing technical advances, new biological insight, or just synchronicity bias?


When it rains it Cascades

Jacob Corn


This week saw not one, but two papers with structures of the E. coli Cascade complex, from the labs of Yanli Wang (Nature) and Scott Bailey (Science). Cascade is a bit like Cas9, in that it's a CRISPR RNA-guided complex at the core of bacterial immunity (though the actual cutting is left to Cas3), but far more complex. While Cas9 is a single protein (and hence attractive for genome engineering), Cascade is 405 kDa split over 11 separate polypeptides and 5 open reading frames. In both structures, the crRNA is stretched out across the entire complex. The structure from Bailey's group also has ssDNA bound, and while it generally follows the path of the crRNA, kinking and base flipping allow the pairing to underwind severely into a ribbon. As is often the case in a large, complex structure like this, there are all kinds of exciting bits to poke into and look at to explain existing biochemical data. I'm looking forward to carefully reading both papers and playing with the structures when they're released. Kudos to both groups!

On a side note, these are huge ~3 Å structures that also contain nucleic acid, yet both are refined to levels that would have been unthinkable just a few years ago: R/Rfree values of 22.5/29.9 and 20.7/16.4! Of course there's more to structure quality (and to a structure) than the R statistics. But still, it's astounding.

