Making the Cut

Is Cas9 specific?

Jacob Corn


I started writing a lengthy analysis about whether or not Cas9 is specific. It contained several in-depth analyses of many papers. There were arguments for and against. But right in the middle I realized that the details of the literature surrounding specificity don’t really matter and I deleted the whole thing. Here’s why…

Cas9’s specificity might be pretty interesting to you if you’re creating cell lines for research use. After all, you don’t want to be reporting a phenotype that actually stems from some off-target knockout. But if you’re thinking about a gene correction therapy, specificity will keep you up at night in a cold sweat.

And that distinction, together with the mindset of research vs therapeutics, is the key. Several papers have shown that Cas9 is both moderately specific and moderately permissive. There’s all kinds of literature about seed regions, sgRNA bubbles, and so on. But right now, we can say that sgRNAs can be at least pretty good, and it’s hard to tell when they’ll be bad.

So for research purposes, it doesn’t matter whether or not Cas9 is highly specific, because you should just choose two distinct guides and demonstrate that your phenotype is robust to choice of guides. The chance that two guides of different sequence will have the same off-target is very low. This is much like what’s done with genome-wide CRISPRcut/i/a libraries, but I think it should also extend to CRISPR-made cell lines and probably even animal models.

And for therapeutic purposes, it doesn’t really matter what the literature says. Even if all papers everywhere said that Cas9 was absolutely stringent, you’d still need to demonstrate specificity (or at least knowledge of benign off-targets) for your application of interest. Anyone who would use a genome-targeting reagent in humans without careful homework on that reagent (regardless of literature precedent) has no business making therapeutics.


Living protocols for genome editing

Jacob Corn


The field of genome editing is moving at breakneck speed and protocols are rapidly evolving. We’ve already made a few different posts on tips, tricks, and protocols for genome editing and regulation. But effectively sharing protocols and making sure that they’re up to date is a daunting task. Much better to have a community-driven effort, where a starter protocol can be tweaked and updated as new developments come along.

That’s why I’m happy to share that we’ve recently started putting our methods on Protocols.io. This is an open repository for protocols, which has the great feature of “forking”. This means you can start from a protocol that you like, tweak it as desired, make a record of the tweaks, and re-publish your changes. Everything is also linked to a DOI, which means you can potentially reference online protocols from within papers.

IGI protocols for T7E1 assays, in vitro transcription of guide RNAs, Cas9 RNP nucleofection, and more are available at https://www.protocols.io/g/innovative-genomics-initiative/protocols

Here’s an explanatory video from the protocols.io team.


Adventures in CRISPR library preparation

Benjamin Gowen


For the last couple of months, a few of us at the IGI have been generating new sgRNA libraries for CRISPRi and CRISPRa. After scraping colonies off of nearly one hundred extra-large LB-Agar plates, it was time to fill the lab with the sweet smell of lysed bacteria and DNA prep buffers. We were working with 21 separate sublibraries, totaling around 250,000 sgRNAs. Plasmid prep on this scale is a completely different beast from anything I had done before, so we decided to share some thoughts on what works (and what doesn’t!) for efficiently prepping sgRNA libraries.

Prepping the work station

We were worried about other plasmids sneaking into our preps, especially individual sgRNA plasmids that get used frequently in our lab. We doused and scrubbed our benches and vacuum manifold with 70% ethanol and RNase-Away before starting, and a few times throughout the day. This should hopefully destroy or denature any stray plasmids hanging around. It’s also worth cleaning out your vacuum trap and putting fresh filters in the vacuum line, since old dirty filters can really weaken vacuum power.

Do all the DNA prep at once

For me, it’s much more efficient to spend a couple of days solely devoted to high-throughput DNA prep than to spread the work out over several days, a few columns at a time. 

Teamwork

The initial lysis and neutralization steps in most plasmid preps are time-sensitive, so there’s a limit on how many samples one person can process at once. We found that a team of 3 people (each processing 8 samples at once) maximized our throughput without us bumping into each other too much. After eluting DNA off the columns, one person can manage the DNA precipitation while others start on the next round of samples.

Starting material

Scraping the colonies off of a 23×23 cm LB-Agar plate gave us an average bacterial pellet mass of 1.1 g (range 0.5-1.6 g). This meant that each plate of bugs got its own maxiprep column (see below for kit recommendations). If you’re working with bugs from liquid culture or other plate sizes, you can pool or aliquot the samples to get a similar pellet mass per column.

Plasmid prep kits

We wound up trying several different plasmid prep kits, and the clear winner in our hands was the Sigma GenElute HP Plasmid Maxiprep Kit. The columns are compatible with the QIAGEN 24-port vacuum manifold we already had in the lab, the protocol was amenable to doing 24 preps in a batch, and the house vacuum system in our building was strong enough to pull liquid through all 24 columns at once. Importantly, all of the columns ran consistently and reasonably quickly. One slow or plugged column is an annoying but solvable problem when doing 4 or 5 preps, but it can really back up the pipeline when doing multiple batches of 24. Our average yield from this kit was 1.4 mg per prep.

Kits to avoid:

  • Sigma GenElute HP Plasmid Megaprep: Sigma advertises 4 times the yield from a megaprep column compared to their maxipreps. Some of our samples could be pooled, so we thought pooling 4 samples into one megaprep would be faster than running them as 4 individual maxipreps. Boy, were we wrong! The megapreps had to be processed one or two at a time, and thus didn’t scale well at all. Worst of all, the megaprep columns were NOT compatible with the QIAGEN vacuum manifold. We managed to fix this with tubing and adapters, but the house vacuum system was only strong enough to pull on one or two of the larger megaprep columns at a time. For us, megapreps took far more time and gave about half the yield we would have expected from just grinding through 4 times as many maxipreps.
  • QIAGEN Plasmid Plus Maxiprep Kit: One of the 8 columns we used stalled while running the cleared lysate and had to be left on the vacuum overnight. Our yields were also lower than with the Sigma maxipreps.
  • QIAGEN HiSpeed Plasmid Maxiprep Kit: These don’t scale well at all. The columns aren’t compatible with a vacuum manifold, and the QIAprecipitator syringe filters require a lot of manipulations for each individual sample. After the first 4 samples, I ditched the QIAprecipitator step altogether. Precipitating the DNA with a 45-minute spin was much faster when dealing with 10 or 20 preps at once.

We’re always interested in ways to make the next sgRNA library prep easier than the last. If you have your own favorite plasmid prep kit or other tricks for efficient library preparation, feel free to leave a comment. 

Special thanks to the other members of Team DNA Prep: Gemma Curie, Amos Liang, and Emily Lingeman. I’d still be running maxipreps if it weren’t for them!


Scoring CRISPR libraries, part II

Jacob Corn


Following up on my previous post about genome-wide CRISPR libraries, I thought it would be useful to show a bit more.

There are many things to consider when doing library work, but two major ones are 

  1. How sure are you that a hit stems from on-target activity vs off-target trickery?
  2. What fraction of the library is functional?

On-vs-off target is the real worry, since you could spend a great deal of time chasing down spurious hits. CRISPR (and sh/siRNA) libraries tackle this problem with redundancy, and one should always require that multiple guides targeting the same gene be enriched for a phenotype. But in libraries with relatively low redundancy (e.g. GeCKOv1 only has 3-4 guides per gene), it’s easy to become enamored of a hit with a red-hot phenotype but only one guide.

The concern about functional fraction of the library is more technical, but impacts both ease of the screen and the redundancy point from above. If many of your guides are non-functional, all that extra work to clone and transduce your massive library vs a smaller one is wasted effort. Worse, your chance at redundancy is diminished with each non-functional guide.

With that in mind, here are updated distributions for existing genome-wide guide libraries targeting human cells. The “penalty” axis is in log scale, and the penalties are easily interpretable to highlight the class of the problem. The ones and tens places (0-99) hold the score of the guide itself, the 100s place holds the number of intergenic off-targets, and the 1000s place holds the number of genic off-targets.

For example, anything with log(penalty)=0-2 has no off-targets and could be OK, though a guide has a much higher chance of being completely non-functional as the penalty approaches 2. Guides with log(penalty)=2-3 have intergenic off-targets, with each 100-spike marking an additional off-target hit (e.g. penalty of 100 = one off-target, 200 = two off-targets, etc). Guides with log(penalty)=3-4 contain Pol III terminator sequences and are probably never even transcribed. Guides with log(penalty)=4+ have genic off-targets, with each 1000-spike marking an additional off-target that impinges on a gene (e.g. penalty of 1000 = one genic off-target, 2000 = two genic off-targets). These penalties compound, so a score of 2,354 means two genic off-targets, three intergenic off-targets, and a guide penalty of 54.
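To make the encoding concrete, here’s a quick sketch of how such a score could be unpacked (the code and names are just for illustration, and the Pol III terminator range isn’t modeled):

# decode_penalty.py -- illustrative sketch of the penalty encoding described above
def decode_penalty(penalty):
    genic = penalty // 1000               # 1000s place: off-targets landing in a gene
    intergenic = (penalty % 1000) // 100  # 100s place: intergenic off-targets
    guide_score = penalty % 100           # last two digits: penalty for the guide itself
    return genic, intergenic, guide_score

# The worked example from the text: 2,354
print(decode_penalty(2354))  # prints (2, 3, 54)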

Note that calling genic vs intergenic is done using Ensembl data and is sensitive to the type of CRISPR experiment. CRISPRi looks for hits within -50 to +300 of a gene’s transcription start site, while CRISPRcutting looks at exons (for the moment we’ll leave aside the scary prospect of cutting within potentially functional intronic or UTR regions).

In general, things are looking pretty good for CRISPRi. There’s a bit of an advantage here, since CRISPRi only seems to work in a narrow window around the transcription start site, and so off-targets are less likely to hit a gene. CRISPRcutting libraries are not doing all that well with off-targets in annotated exons, and only deeper per-guide analysis would tell whether guide redundancy takes care of mis-called phenotypes. It’s nice to see that GeCKO has improved with v2 (e.g. got rid of terminator sequences), and hopefully v3 can get some of the genic off-targets under control.

I want to stress that all of these libraries work just fine and have been used successfully to give biological insight. But keep these guide properties in mind when working with each library and thinking about hits arising from their use.

[Figures: penalty distributions for the CRISPRi library and the Wang and GeCKO CRISPRcutting libraries]


Coding for the wet lab

Jacob Corn


Large datasets are becoming the norm in modern biology, but I still see wet-lab students and postdocs who are stymied by anything that can’t be accomplished in Excel. Should every grad student learn how to write code? Probably not. But those who know even the basics of programming will find that their lives are oh so much easier. Not only will they be able to automate analyses to turn a daunting (or impossible) task into an easy afternoon’s work, they’ll also be able to “speak the language”, which will give them a leg up in describing problems and brainstorming solutions with dedicated computational biologists.

There are some nice pieces about this, but let me provide a few examples that should be more concrete to those involved in genome editing.

  • Let’s say you’re doing a CRISPRi screen. You’ve done some sequencing, removed constant regions, and now you have a file containing a huge number of protospacers. These should all map to guides and associated genes, which are listed in another file with the format <gene> <guide sequence>. You can write a very simple program to map guide sequences to genes and count the number of times each one appears.
  • Or maybe you’ve just edited a gene and you’re interested in the distribution of indels around the cut site. It’s not difficult to record the frequency of each insertion and deletion from the millions of reads in a next-gen sequencing experiment.
  • Even simpler: you want to target a whole bunch of genes in a pathway, so you plan to make 50 guides targeting 25 distinct genomic regions. You can easily automate the design of all the primers needed for the T7E1 assays.

Of course, you must also be aware of the necessity for good statistics. The ability to automate tasks is no replacement for a good computational biologist or biostatistician, who can analyze whether measurements are actually significant. This is a good example of knowing enough to get you into trouble; be aware of when to seek out an expert! But a few hours’ work on a program can save days of trying to process data with inappropriate tools (e.g. Excel).

Personally, I think python is the way to go for most simple programming tasks. It’s reasonably fast, pretty easy to learn, deals well with text, has some good biology-oriented modules (e.g. Biopython), and has a growing user base (making it easy to find help). Python certainly isn’t the fastest, but the focus here is on writing quick one-offs to get a job done rather than making an ultra-fast production tool. General programming knowledge is probably sufficient for most tasks, and there’s no shortage of free tutorials online to get started.

To illustrate how simple a program can be, here’s a script to accomplish the CRISPRi (or any guide library) guide-gene mapping/counting tasks described above. It’s obsessively documented, to try to show how this can be quite straightforward. It is also written for ease of understanding by a beginner rather than speed or conciseness. Writing this took about 20 minutes.
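Here’s a sketch along those lines (the file names and exact input format are assumed for illustration):

#!/usr/bin/env python
# count_guides.py
# Map protospacer reads back to library guides and count them.
# Assumes guides.txt holds tab-separated "<gene> TAB <guide sequence>" lines
# and protospacers.txt holds one trimmed protospacer sequence per line.
import sys

guide_file, reads_file = sys.argv[1], sys.argv[2]

# Build a dictionary from guide sequence to gene name.
guide_to_gene = {}
with open(guide_file) as f:
    for line in f:
        gene, guide = line.strip().split("\t")
        guide_to_gene[guide] = gene

# Tally how many times each guide shows up in the reads.
counts = {}
unmapped = 0
with open(reads_file) as f:
    for line in f:
        seq = line.strip()
        if seq in guide_to_gene:
            counts[seq] = counts.get(seq, 0) + 1
        else:
            unmapped += 1  # read doesn't match any guide in the library

# Report gene, guide, and count, most abundant first.
for guide in sorted(counts, key=counts.get, reverse=True):
    print("%s\t%s\t%d" % (guide_to_gene[guide], guide, counts[guide]))
print("unmapped reads: %d" % unmapped, file=sys.stderr)

(on the command line)
$ python count_guides.py guides.txt protospacers.txt > counts.txt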


More on the soon-to-be plateau of CRISPR/Cas9

Jacob Corn


This week’s post is short, since I’ll be traveling most of next week and needed the weekend to catch up on some data. Following up on last week’s post about the explosion of CRISPR/Cas9, I thought I’d use another RNAi parallel to illustrate what I mean by Cas9 genome engineering becoming commonplace.

Instead of searching anywhere in a paper, here’s what a by-year search for “RNA interference” in just the title of a paper looks like: big spike, then gradual slow decline as the technology becomes commonplace. It’s not that people started using RNAi any less often. In fact, I’m sure its use is only growing. But you don’t put a common technique in the title (even if it’s a linchpin) unless there’s very good reason to do so.

[Figure: papers with “RNA interference” in the title, by year]

Breaking RNA interference titles down by month looks like this.

[Figure: papers with “RNA interference” in the title, by month]

And here’s where we are for “Cas9” in the title of a paper, by month.

[Figure: papers with “Cas9” in the title, by month]

Notice how the recent number of Cas9-title publications resembles the point at which RNA interference began showing up in titles less often. One of my goals at the IGI is to hasten the day when genome editing is something everyone just does.


The explosion of CRISPR/Cas9

Jacob Corn


It seems like every time you turn around, there’s another paper that mentions CRISPR/Cas9. Just how fast has the field exploded? There’s nothing like data to find out! In the histogram below, each bar on the X-axis is a week (arbitrarily starting in late 2011) and the Y-axis is the number of papers in a Pubmed search for “CRISPR AND Cas9” (side note: one needs to include CRISPR in the search because “Cas9” spuriously includes results from a few labs that like to call caspases “Cas”, e.g. caspase 9 = Cas9, caspase 3 = Cas3).

[Figure: PubMed papers matching “CRISPR AND Cas9”, per week]

Your hunch was correct — there IS a new paper mentioning CRISPR/Cas9 every time you turn around: several per day, in fact.

And I think (hope) that means we’re reaching a tipping point. It’s currently in vogue to sprinkle “Cas9” throughout a paper, even if the system is used as a tool for biology and not itself the focus of the work, because it gets the attention of editors. But as more and more groups use Cas9, gene editing will become commonplace. Routine, even. And that’s exactly what should happen! Can you imagine a paper these days that crows about how they used gel electrophoresis to separate proteins, or PCR and restriction enzymes to clone a gene? It’s the natural course of things for disruptive technologies to quickly become just another part of the toolbox. Take RNAi, for example, which completely upended biology not too long ago and is now used routinely and without fuss by most cell biology labs. Compare the plot above with the equivalent for RNAi (the search here was for “RNA interference” due to complications in searching “RNAi”, since that also finds other hits on a viral RNA called “RNA-one”. Also note that there are some false positives before Andy Fire’s 1998 Nature paper).

[Figure: PubMed papers matching “RNA interference”, per week]

I’m looking forward to the day that CRISPR/Cas9 becomes as commonplace as RNAi, since it will mean we’ve arrived at another new era of biology. Want to know what that conserved genetic element does? Just remove it. Want to find out what that conserved residue does in your favorite protein? Mutate it in your organism of interest. No big deal!

That’s going to be an incredible time. I’m in this for the destination, not the vehicle.


For those interested, here’s how to generate the histogram.
(First, download MEDLINE-format records for your PubMed search.)

#!/usr/bin/env python
# medlinedates.py
# Print each record's entry date as an ordinal day number, one per line.
from Bio import Medline  # requires Biopython
import datetime
import sys

fin = sys.argv[1]
with open(fin) as p:
    records = Medline.parse(p)
    for record in records:
        d = record['DA']  # 'DA' is the MEDLINE date-of-entry field, YYYYMMDD
        d = datetime.date(int(d[:4]), int(d[4:6]), int(d[6:8]))
        print(d.toordinal())

----
(on the command line)
$ python medlinedates.py [medline format file you downloaded] > dates.txt
----
(in R)
# plotdates.R
d <- read.table("dates.txt", header=F)
# weekly bins, plus a final partial bin; unique() guards against a duplicated last break
ranges <- unique(c(seq(min(d$V1), max(d$V1), by = 7), max(d$V1)))
hist(d$V1, breaks=ranges, freq=TRUE, col="blue")
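
If you’d rather fetch the records programmatically instead of downloading them by hand, Biopython’s Entrez module can do it. Here’s a sketch (the script name is assumed, and NCBI asks for a real email address):

#!/usr/bin/env python
# fetch_medline.py
# Download MEDLINE-format records for a PubMed query and write them to stdout.
from Bio import Entrez
import sys

Entrez.email = "you@example.com"  # replace with your own address

query = sys.argv[1]  # e.g. "CRISPR AND Cas9"
search = Entrez.read(Entrez.esearch(db="pubmed", term=query, retmax=100000))
ids = search["IdList"]

# Fetch in batches to be polite to the NCBI servers.
for start in range(0, len(ids), 200):
    handle = Entrez.efetch(db="pubmed", id=",".join(ids[start:start + 200]),
                           rettype="medline", retmode="text")
    sys.stdout.write(handle.read())
    handle.close()

(on the command line)
$ python fetch_medline.py "CRISPR AND Cas9" > crispr.medline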


What comes after Precision Medicine?

Jacob Corn


Lately I’ve been thinking quite a bit about the definition of Precision Medicine. This was brought about by the CIAPM call for proposals and two associated workshops. The workshops have been stellar, with a feeling more of collaboration than competition. Over the two days, I got to thinking about what Precision Medicine seems to mean right now versus what it might mean in the future.

Wikipedia has an interesting contrast between Precision Medicine and Personalized Medicine. But both PMs are defined as not necessarily implying treatments that are customized for an individual (or even a subset of patients). Instead, they are focused on using large data sets (genomics, proteomics, other omics, health records, and so on) to determine how some existing medicine should or should not be delivered to a patient.

Let’s say you have a new drug that targets a particularly nasty form of cancer. PMs are currently focused on deciding how you’re going to administer that drug: who will get it and who won’t. That’s very important, all the way from clinical trials into general use. Trials may read out falsely negative if people who have no hope of benefit are included in the trial, and since every drug has side effects it’s best to pair a treatment with those for whom that risk:reward is favorable. But PMs are not about doing diagnostics on an individual patient and then custom-designing a new therapy for that patient.

This has all been weighing on my mind because of the two worlds in which I’ve worked. While I was in biopharma, we talked about Precision Medicine much like the above paragraph. “Precision” meant including tens of thousands of people but excluding millions. Keith Yamamoto and Atul Butte (link probably obsolete soon, as Atul is now at UCSF) phrase this very nicely in terms of advancing human health by having the courage to tell people “No.”

Now I work in a field in which people in my lab routinely design reagents capable of specifically targeting one gene and changing a single base. If even one person has a mutation that causes a disease, we could theoretically make a reagent to change that mutation within a week or so (in the lab!). We’re not doing that in the clinic, but I think the writing is on the wall, and things might be different in a decade or so.

Widespread treatment-for-1 would be a major challenge, since it differs from normal medicine (and even Precision Medicine) in so many ways. How do we pay for such a thing? Will it be covered by insurance? What’s the incentive for a drug company to make a therapy for only one customer? Or is the technology itself the product and the therapy just one small example? How should we regulate it? How do we know if an intervention is working? How do we know if it’s safe? The N-of-1 trial might be the only thing possible, because there may be only one person with this particular mutation. 

As sequencing gets cheaper and faster, we’ll quickly accumulate massive piles of data. That’s currently a very centralized model, in which reams of data flow into big centers and high-level rules for the application of treatment flow outwards. Turning the clock further forward, a decentralized future is also possible, in which hospital bedside sequencing informs programmable therapies that can be created in the very same hospital. Near-future Precision Medicine will save many lives, but a one-off-treatment system could fill in the corners to spell the end of orphan diseases. Science fiction at the moment, and much needs to happen to bring it about (not just biology, but engineering, regulation, and so on), but at least now we can see a path through the woods.


Our focus on the future present

Jacob Corn


It’s been a rather wild ride in the last month, which hasn’t left much time for blog posts. But I’m planning to turn over a new leaf and start posting at least something short at the beginning of every week.

This week’s post addresses a question that I’ve been asked in many ways by many people: what about germline editing? After the IGI started the ball rolling with a small meeting in Napa, we penned a call for a temporary moratorium on germline editing and have been lobbying for a larger summit, which is now slated for October. I think it likely that restriction or proscription of germline editing will be the outcome.

At this time, the IGI Lab will not do research on human germline editing for several reasons, including:

1. The IGI Lab is focusing on diseases for which somatic (non-heritable) editing would be a transformative advance. The media loves to talk about designer babies, but we actually don’t know the first thing about the genetic basis behind complex traits like beauty or intelligence. But we do know a lot about genetic disease, particularly so-called monogenic disorders, in which a problem in a single gene causes the disease. Online Mendelian Inheritance in Man currently contains about 3,500 disorders that have a clinical phenotype for which the molecular basis is known. It’s clear that we should start with one of these, such as sickle cell disease, cystic fibrosis, muscular dystrophy, or Huntington’s disease. The thing is, curing most genetic diseases wouldn’t require germline editing. Almost any hematopoietic disease could be cured non-heritably by taking a patient’s bone marrow, performing gene correction, and then re-implanting the edited bone marrow. By now we’re very good at bone marrow transplants. And once delivery systems are ironed out, even non-hematopoietic diseases could be cured in adults with gene correction therapy. But eventually achieving the above will take a lot of work. At the IGI Lab, we’re focusing on that future transformation of genetic disease from something we treat with palliative care to something we cure.

2. Cas9 technology is currently too nascent for germline editing to be wise. Gene correction is still a relatively new field, with few clinical successes (or even attempts) to refer to. And compared to other gene editing technologies, such as ZFNs or TALENs, Cas9 is the new kid on the block. There are just so many questions still outstanding about the technology, as evidenced by the huge surge of papers from all over the world that do nothing but figure out new things about Cas9: How does it find targets? What do off-target sequences even look like? What happens between cutting and the appearance of edits? At the IGI we spend a lot of time using Cas9 to do gene editing in somatic cells, and we’ve gotten very good at it (more on that when the papers come out). But sometimes we get surprised by the outcomes. That makes me nervous enough for somatic editing, and we obsessively characterize individual reagents for our clinical projects. But the Rumsfeldian Known Unknowns and Unknown Unknowns are too great in relation to a heritable change in someone’s genome. When moving to the clinic, one should prefer a boring tech over one that’s exciting and new but poorly understood, and if no boring tech exists, keep working. In the balance of impact vs risk, a person’s life rests in one pan.


Three timescales of impact for next-gen genome editing

Jacob Corn


This post expands on a slide that I often present in seminars: what is the scale (in time and impact) of next generation genome editing? I’m not restricting this to CRISPR/Cas9, because the field is moving so fast that it’s anyone’s guess whether we’ll soon see a next next-gen (Cas10?). But the accelerator has been pressed firmly to the floor, and there’s no going back. To avoid overuse of speculative words like “might” and “could”, I’ll just speak as if I have a crystal ball. But futurism is often a fallacy, and the genome editing field is only two years old and moving very quickly, so consider what’s below a sketch at best and random guessing at worst.

Edit: Here I’m focusing on just a few areas out of many. There are very exciting things on the horizon for editing of crops and livestock, synthetic biology in normally difficult systems, and much more. I’m leaving all of that aside for now as fodder for another post.

Short: In the next few years I think we’ll see greater adoption of genome editing in many labs, both academic and industrial. This will mostly be what I call “RNAi v2.0” — disruption of genes in a very fast and easy mode (either via CRISPRcutting or CRISPRinhibition). This will extend to both human cells and model organisms, and the scope accessible for reverse genetics will be greatly expanded. Now that more and more genomes are sequenced, we’ll finally have a way to figure out what biologies underlie all of those great annotations in those organisms (reverse genetics) or screen for which genes are responsible for incredible phenotypes (forward genetics). How do salamanders regenerate limbs? How do some fungi turn insects into zombies? What are the roles of genes expressed during Plasmodium infection? Does ablating gene X slow tumor progression in this model system? Are all of these genes really necessary for epithelial differentiation in the gut? These kinds of questions will be broadly answerable in both academic and industrial research settings: fundamental discoveries that will accelerate and broaden our understanding of the world around us.

Medium: Within five years true gene editing (surgically replacing one sequence with a defined replacement) will have matured and be as easy in human cells and model organisms as plasmid mutation currently is in bacteria. We’re already starting to see some hints of this on the horizon, so maybe this should even be in the “short” bin. But I think a lot of current work is focused on very low hanging fruit (important though it is), and there’s still no clear path towards quickly and robustly engineering silent or deleterious variants, for example mutants with a fitness disadvantage. So this one goes into “medium term”. Surgical introduction of mutations would be huge for any number of basic biologies, since it would enable one to readily ask reductionist and mechanistic questions in the context of a living cell or organism without confounding factors. On the translational front, in the medium term gene editing will totally change the way preclinical research is carried out. Custom-designed safety models (e.g. humanized rats), highly engineered cell lines that meld target and phenotypic screening, synthetic biology for enhanced drug production, and so on. People have been wanting to do these things for a long while, and they might take a little longer to achieve in industry only because the focus will include robustness of the systems rather than purely speed, but they’re coming. More relevant to the general public, in the medium term we’ll start to see the widespread clinical emergence of ex vivo therapies that take advantage of gene editing, especially in the hematopoietic system. Clinical research and trials are already ongoing here (e.g. Sangamo’s work with ZFN knockout of CCR5 for HIV), but now I’m talking about FDA approval and widespread use of an edited product as a therapeutic. The trial data has so far been very impressive on many fronts, but time will tell, and the finish line is always further away than you think.

Long: Since the likelihood of anyone accurately predicting at this timescale is quite low, rather than make any specific predictions I’ll instead wax philosophic. Here we’re starting to talk about disruptive science fiction entering our lives in a real way. Things like in vivo editing in adult or postmitotic tissues. Sci-fi may actually be an apt comparison and offers a few positive examples of successful prognostication: Edward Bellamy predicted credit cards in 1888 and Arthur C. Clarke described communications satellites in 1945. And in a way, media of all kinds has been preparing us for genome editing for decades. I was recently asked how I explain what genome editing is and why it’s practically beneficial. But the thing is, I actually don’t need to do much explaining. I’ve talked about genome editing with taxi drivers, hair dressers, graphic designers, high school students, and Hollywood actresses. Everyone gets it right away. You don’t need to know a thing about Cas9 or mechanisms of DNA break repair to understand genome editing. Most people very quickly understand what genome editing is and they see how much good it could do. But everyone also sees how much harm might come if we’re reckless and how much care should be taken. So in the long term, our relationship with genetic diseases will fundamentally change. I’m not necessarily talking about germline editing, since one might have the same outcome with the ability to replace affected tissues with edited tissues. There is the opportunity for real and permanent cures for terrible diseases in which people currently just make do. That’s powerful stuff. But it’s a long road, and there’s a lot left to be done.

