Making the Cut

Welcome to Lena

Jacob Corn


Welcome to Lena Kobel, who joins the lab as a Cell Line Engineer. Lena has a long history in genome engineering, with previous experience in Martin Jinek’s lab and at Caribou Biosciences. Lena will be working on precision cell models and screens to study the genetics of DNA damage and genome editing.


Bootstrapping a lab

Jacob Corn


Today I’m going to talk about setting up a lab from a 10,000 foot view. I got thinking about this because my social media feed was recently filled with people announcing acceptance of positions, and I know several people going through big moves (myself included). This post is about how to nail the big stuff so that you can build a productive team within a short amount of time. I won’t be speaking about the science – you know that better than nearly anyone.

I’ll mostly talk about an academic setting, but this can be translated to industry and academic groups in a variety of environments and sizes. Terminology and the degree of control you have might differ. Think of this as a framework on which to build your particular situation.

Start early

Several of the items in this blog entry are things that you should have been thinking about before you even got your position. You probably leveraged some of them in your proposals and interviews. How are things different now that you have the job? What should you really do?

Read a book

All PIs should read books on the high-level philosophy of starting a business. Labs are basically small businesses: you manage talent, you build a culture, you have a budget, and in the case of an academic lab you need to raise capital. There are relatively few books on starting a lab, and most are written by academics, so the genre is stiflingly narrow-minded. But there are many books on startups in a diverse range of markets. Read them, translate them into your particular setting, throw away what doesn’t sound right, and keep what does.

Decide what kind of a lab you’re going to be

You wrote a proposal and described your research to land this job. But what kind of lab do you want to run? By this I mean all aspects including size, makeup, culture, pressure level, hierarchy (or lack thereof), and much more. You might have talked to people about your thoughts on this, but revisit it very deeply before you do anything. Because this vision completely defines everything else. Only start the actual work of setting up a lab after you’ve really defined what you want your lab to be. What are you doing? What are people in your lab doing? How are you feeling? How are they feeling? What do other faculty think about your lab? What do students not in your lab think about your lab? It’s easy to emulate your postdoc advisor’s lab or just keep doing what you did in your previous environment, but doing so is only a good idea after an explicit decision. Putting yourself in a situation you don’t enjoy is not going to be fun for anyone.

Write your lab plan down, but don’t share it

You need a vision, but you also need to be able to roll with the punches. Write down your vision. Format and length don’t matter, so long as you record your thoughts in a way that you can interpret later. Then store this document and revisit it occasionally. But don’t explicitly share it with the lab, because your vision is going to change, and it might change in a big way. Instead of sharing the document, demonstrate the changes you want to see through your actions. Your lab members don’t need an outdated document holding them back, and they will respond better to demonstrations of why the new way is better. This advice differs from that of other PIs, who find that a detailed lab manual helps them with management, so your mileage may vary.

What talent do you need to be productive?

Every lab has different needs according to its science, infrastructure, and culture. Who do you need on your team to become the kind of lab you want to be? Be aggressive about talent, and balance hard and soft skills. A lab full of milquetoasts is rarely productive, but neither is a lab full of diva rock stars who can’t get along with one another. You need the rarest of breeds – high-performing people who love working with other high-performing people. You want people whose excellence feeds off each other. And you need them in every role. You need your admin to be as on top of things as your best postdoc. And you need each person in your lab to buy into its unique culture. A brilliant team player in the wrong cultural fit won’t perform to their highest potential.

Talent is the most critically important aspect of a lab. No matter what any PI thinks about their own talents, they are nothing without the people in their lab. All of that is easy to say and hard to do. Finding great people takes a lot of time, and getting the best people in every position is rarely possible. Plan to spend a lot of time screening talent. Always have your eye out for good people, at poster sessions, at meetings, and over coffee. Go after people you want. Show them why your lab will be an inspiring place to do great work.

Obsessively plan out everything you will need

You thought you arranged for much of this when you negotiated your startup, but you probably planned the capital equipment in detail and only sketched everything else. So make a spreadsheet and start writing down absolutely everything you need. Pipettes. Tips. Microcentrifuges. Eppendorf tubes. A microwave. Lab pens. What brand do you want for each? What amount? What is the catalog number? Don’t go overboard on supplies when doing your initial purchasing. Buy just enough to get going and be prepared to buy more very quickly. But take advantage of lab startup deals on equipment with big supply clearinghouses like VWR and Fisher. You can save a lot of money if you buy a critical mass of everyday essentials all at once, especially if you do so late in the year, when sales reps need to hit end-of-year sales targets. Again, don’t over-buy everything all at once! You don’t want to end up sitting in a lab stuffed to the brim with things you thought you might need but are actually collecting dust.
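If it helps to make the spreadsheet concrete, here is a minimal sketch of what a purchasing sheet could look like as a script. Everything in it – the items, columns, quantities, and file name – is a hypothetical placeholder rather than a recommendation.

    # Hypothetical starting-inventory sheet; swap in your own items, brands,
    # catalog numbers, quantities, and priorities.
    import csv

    columns = ["item", "brand", "catalog_number", "quantity", "priority"]
    rows = [
        ["P200 pipette", "brand-you-prefer", "catalog-number-here", 4, "day one"],
        ["Microcentrifuge", "brand-you-prefer", "catalog-number-here", 1, "day one"],
        ["1.5 mL tubes (case)", "brand-you-prefer", "catalog-number-here", 2, "first month"],
    ]

    with open("lab_startup_purchasing.csv", "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(columns)
        writer.writerows(rows)

The script matters less than the habit: every item gets a brand, a catalog number, an amount, and a priority before the first order goes out.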

Execute flexibly

You’ve spent a lot of time planning. How do you know that your plans are perfect? Don’t worry, they never are. Get started and revise along the way. Don’t let crazy situations and stress derail plans you made carefully in a calm state of mind. But if something isn’t working and is creating that stress then change it as soon as possible.


Shapers and Mechanists

Jacob Corn


There’s a series of cyberpunk short stories and a novel written in the 1980s by Bruce Sterling, called Schismatrix. It centers around two major offshoots of future humanity that have chosen very different paths for their lives. The Shapers have embraced biological engineering to the extreme, changing organisms and themselves to adapt to harsh new environments and to eradicate diseases. The Mechanists have embraced digital/mechanical culture, building autonomous drones to do their work and enhancing themselves with electronic gadgets.

We live in a Mechanist world.

Mechanists triumphant

Consider how quickly we’ve embraced technologies that could be summarized as “digital” or “mechanical”. Smartphones are widely penetrant. Pervasive computing (e.g. the Internet of Things) is growing by the day. Software is a massive industry, with some companies worth more than entire countries. Robotic assembly lines work night and day. Your Amazon orders are compiled for shipping by a fleet of self-guiding drone-shelves. 3D printing of plastics is used to rapidly prototype parts. There are popular movements to enhance human abilities with chemical “nootropics”. Even food has been reduced to its minimal essential parts via Soylent. And we are so willing to pay for it all that we demand yearly updates to our favorite products. Mechanists have risen to the top without us even realizing it.

Shapers on the fringes

So where are the Shapers? Strikingly, biologics are at a technical high point but a social low. Vaccines save millions of lives but are viewed with suspicion. Antibodies made in cellular factories cure cancer but are pilloried for being too expensive. Engineered plants can prevent blindness, feed the world, and make drugs to combat Ebola, but are viewed with distaste. We’re starting to gain control over genomes themselves, from editing-in-place to the creation of entire chromosomes. There’s hope on the horizon to cure genetic diseases that have plagued us for millennia, but newspapers are fixated on designer babies.

A culture in shift?

But is the pillory of Shaper values changing? Are biologics starting to rise? There are some signs, from the small to the large. Home-brewing exotic craft beer is more popular than ever. GloFish are cute novelty pets. Biohackers self-organize clubs to learn how to clone DNA in their kitchens using compact “bento” labs-in-a-box. Books describe paths to resurrect extinct species. Wildly popular TED talks encourage microbiome self-maintenance. And patients with genetic diseases are eager to try gene insertion and editing therapies that are on the horizon.

Clash or convergence?

The Schismatrix series envisions the Shaper-Mechanist dichotomy as an epic struggle. The two world views vie for dominance through violence, both physical and political. Bruce Sterling injected many stereotypes you might expect into the cold Mechanists and the bio-controlling Shapers. In the real world, I don’t see it going that way. I think people take what works for them, no matter what type of tech. Biological technology had a long head start on digital technology (think animal/plant domestication vs the Babbage difference engine). Digital technology started a rapid acceleration relatively quickly, possibly because it’s built on designed systems. This is in contrast to biological technology, which is currently built on systems we discover in the world around us and adapt to our uses. But as we build more tools and eke out greater understanding of biological mysteries, we’re starting to see a rapid acceleration of biological technology. Will we embrace it the way we embraced mechanical technology? Will we use both together? I think so, and I think it will come naturally as these technologies present solutions to pressing problems.


Backpacking season

Jacob Corn


It’s important to spend time outside the lab. And before you ask, that’s not why the blog has been dormant. I was teaching this last semester (a general biochemistry lab), plus working with members of my lab to get several papers out the door. Stay tuned for more on that front. Next time I’ll go back to CRISPR blogging. But for now, let’s talk about staying out of ruts. 

The IGI/Berkeley isn’t my first group leader rodeo. I spent several years in biotech/pharma, and while there I learned an important lesson the hard way. It’s very important to spend time outside of the lab thinking about something other than science. This is not the advice you’ll hear from some mentors, especially when you’re starting out as a group leader. But it’s oh-so-important for the marathon of a career in science.

When you’re just getting started, you’ll spend a lot of time executing. Mentors often stress the importance of execution to the detriment of everything else. But there’s a line to draw, and it is entirely possible to spend too much time doing and thinking about the science that’s right in front of your face. Why?

Execution vs Inspiration

As scientists (and in many other professions), we aspire to do something important. This takes creative and surprising ideas – approaches or entire projects that creatively solve difficult problems. It’s very hard to have surprising ideas when you’re in execution mode. Personally, it’s even hard for me to have surprising ideas at scientific conferences. While scientific meetings are great places to hear about diverse science and even synthesize ideas, I rarely find myself coming up with anything radically new. I learn a ton at meetings, but mostly I’m putting facts in my brain. For me, creating a big idea from scratch takes backpacking.

When I go backpacking with my wife, we typically spend 10-15 hours per day hiking. She’s also a scientist, but working in a totally different field from mine (infectious disease epidemiology). 10-15 hours is a lot of time for us to talk about far-ranging ideas in each other’s fields, but also to chat about ideas other than science and to let our minds wander. For the first half day or so, I find that I’m still thinking about immediate problems in the back of my mind: how do I write the next section of this paper, or how do I help a postdoc design this next experiment. But after that, I stop trying to actively fix problems. Sometimes thinking about other things leaves my brain in a place I didn’t expect, and sometimes a good idea is waiting there. I don’t always have good ideas while backpacking, but when I do they tend to be things I never would have thought of at the lab. By the end of my last backpacking trip, I had mentally outlined two R01 grants and worked out an interesting angle for a biotech NewCo I’m starting.

Other types of backpacking

Maybe your version of backpacking is a pottery class, or binge-watching Netflix, or scuba diving. But having something like that and making enough time for it is super important. Having new ideas lets you see ways around problems rather than through them. And perhaps more importantly, it will save your sanity. Execution mode is hard and stressful. Creative mode is fun and relaxing. To avoid burn-out, mix execution with creativity. Reserve significant time for fun on a regular basis. You’re going to be doing something your entire life. Hopefully your work is globally fun, but it won’t always be locally fun. Try to make your life globally fun. But if you’re a workaholic, rest assured that fun will make your work better. And it will hopefully make you a happier person as well.

 

 

 


Sequence replacement to cure sickle cell disease

Jacob Corn


My lab recently published a paper, together with outstanding co-corresponding authors David Martin (CHORI) and Dana Carroll (University of Utah), in which we used CRISPR to reverse the causative allele for sickle cell disease in bone marrow stem cells. This work got some press, so unlike other papers from the lab I won’t use the blog to explain what we did and found. You can go elsewhere for that. But I do want to explain the motivation behind the work, as well as why we chose this approach.

Sickle cell disease and gene editing

Next-generation gene editing is already transforming the way scientists do research, but it also holds a great deal of promise for the cure of genetic diseases. One of the most tractable genetic diseases for gene editing is sickle cell disease (SCD). The molecular basis has been known since 1949, so it’s relatively well understood. Its root cause is in bone marrow stem cells (aka hematopoietic stem cells, or “HSCs”), which are easy to get to with editing reagents. It’s monogenic and recessive, so you only need to reverse one disease allele for a cure. There’s no widely used cure – though bone marrow transplantation from healthy donors can cure the disease, very few patients get the transplant for a variety of reasons (unlike severe combined immunodeficiency, another HSC disease in which most patients do get transplants). And we know from various sources that editing just 2-5% of alleles can provide benefit to patients (which equates to 4-10% of cells, due to the Hardy-Weinberg principle).
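To spell out that last parenthetical as I read it: if edited alleles are distributed randomly across cells, a cell with two alleles carries at least one edit with probability 1 - (1 - f)^2, where f is the allele editing frequency. A quick sketch of the arithmetic (my illustration, not a calculation from the paper):

    # Fraction of cells carrying at least one edited allele, assuming edits land
    # on the two alleles of each cell independently at frequency f.
    def fraction_of_cells_with_edit(f):
        return 1 - (1 - f) ** 2

    for f in (0.02, 0.05):
        print(f"{f:.0%} of alleles -> {fraction_of_cells_with_edit(f):.1%} of cells")
    # 2% of alleles -> 4.0% of cells
    # 5% of alleles -> 9.8% of cells

Which is how 2-5% of edited alleles ends up corresponding to roughly 4-10% of cells.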

But while several groups have tried to make a preclinical gene editing candidate to cure SCD by replacing the disease allele, most efforts have so far met with challenges. Things at first looked very promising in model cell lines, but when people moved to HSCs they found that the efficiency of allele conversion dropped substantially. Efficacious replacement of disease alleles is something of a holy grail for gene editing, but has so far lagged behind our ability to disrupt sequences.

Sequence knockout to cure SCD

In the field of SCD, problems with sequence replacement have prompted efforts to use sequence disruption to ameliorate the disease. The most promising of these approaches up-regulate fetal hemoglobin through a variety of mechanisms, including tissue-specific disruption of Bcl11A (it needs to be tissue-specific because Bcl11A does a lot of things in a lot of cells). Up-regulation of fetal hemoglobin occurs naturally in humans (in Hereditary Persistence of Fetal Hemoglobin, or HPFH) and is protective against the sickling of adult hemoglobin.

Fetal hemoglobin upregulation strategies are very exciting, make for some fascinating basic research, and could eventually lead to new treatments for SCD patients. There are a few people in my lab working along these lines, and several other groups are doing incredible work in this area. But while the editing is easier, there are still some major challenges to clinical translation, largely due to the new and mostly unexplored biology surrounding tissue-specific disruption of Bcl11A.

Sequence replacement to cure SCD

We decided to go back to basics and find out whether our work on flap-annealing donors and the Cas9 RNP could put the “boring” approach of sequence replacement back on the table. In this case, we want to be as boring as possible. I’d rather not get surprised by new biological discoveries about gene regulation while we’re working in patients.

We also wanted our approach to be as general as possible – to develop something that works for SCD but is accessible to clinical researchers everywhere. That would fit the democratization theme of Cas9. Both the nuclease targeting reagent and the approach to allele replacement would be fast, cheap, and easy for everyone to use, so that anyone could ask questions about their own system in HSCs and maybe even develop gene editing cures for the particular disease they work on.
This is in contrast to some viral-based editing strategies that seem reasonably effective but are very slow to iterate and have a high barrier to entry. If possible, I’d rather develop something that any hematopoietic researcher or clinical hematologist can pick up and use, with rapid turnaround from idea to implementation. That’s the way you get to clinician-driven cures for rare genetic diseases, which is the long-term promise of easily reprogrammable gene editing.

All of the above is why we used Cas9 RNP and coupled it with flap-annealing single stranded DNA donors. We found that we didn’t even need to use chemical protection of the guide RNA or single stranded DNA, though we certainly tried that. In short-term editing we found very high levels of editing, and in long-term mouse experiments we found edits almost five-fold higher than previously reported. I think this is certainly good enough for researchers to start using this method to tackle their own interesting biologies. But for us, there’s still a lot of work to be done before this becomes a cure for SCD.

Next steps towards the clinic

First, we need to try scaling up our editing reagents and method (Cas9, guide RNA, donors, stem cells, and electroporation) to clinical levels, and we also need to source them with clinical purity. This is a non-trivial step, but absolutely critical to eventually starting a clinical trial. Second, we need to establish the safety of our approach. We found one major off-target site while doing the editing, and it lies in a gene desert. But just because it’s not near anything that looks important doesn’t mean we don’t need to do functional safety studies! (see my previous blog post about safety for gene editing for more on this) We have a battery of safety studies planned in non-human models, and we want to take a close look to see whether we can take the next step into the clinic.

My collaborators and I are committed to the clinical application of gene editing to cure sickle cell disease, and we hope to start a clinical trial within the next five years. But the data will guide us – we want to be very careful, so that we know the cure is not worse than the disease. There are many moving parts involved in these translational steps and some of them are a little slow, so please stay tuned as we try to bring a breakthrough cure to patients with SCD.


Improved knockout with Cas9

Jacob Corn


Cas9 is usually pretty good at gene knockout. Except when it isn’t. Most people who have gotten their feet wet with gene editing have had an experience like the one in the gel below, in which some guides work very well but others are absolute dogs.

[Gel image: knockout efficiency varies widely from guide to guide]
That’s a problem if you have targeting restrictions (e.g. when going after a functional domain instead of just making a randomly placed cut). So what can one do about it?
 

TL;DR Adding non-homologous single stranded DNA when using Cas9 RNP greatly boosts gene knockout.


 

The problem

There have been a few very nice papers showing that Cas9 prefers certain guides. I refer to these as the One True Guide hypothesis, with the idea being that Cas9 has somehow evolved to like some protospacers and dislike others. The data doesn’t lie, and there is indeed truth to this – Cas9 likes a G near the PAM and hates to use C. But guides that are highly active in one cell line are poor in others, and comparing guide preference experiments in mouse cells vs worms gives very different answers. That’s not what you’d expect if the problem lies solely in Cas9’s ability to use a guide RNA to make a cut.
 
But of course, Cas9 is only making cuts. Everything else comes down to DNA repair by the host cell.
 

Our solution

In a new paper from my lab, just out in Nature Communications, we found that using a simple trick to mess with DNA repair can rescue totally inactive guides and make it easy to isolate knockout clones, even in challenging (e.g. polyploid) contexts. We call this approach “NOE”, for Non-homologous Oligonucleotide Enhancement.
(The acronym is actually a bit of a private joke for me, since I used to work with NOEs in a very different context, and Noe Valley is a nice little neighborhood in San Francisco.)
 
How does one perform NOE? It’s actually super simple. When using Cas9 RNPs for editing, just add non-homologous single stranded DNA to your electroporation reaction. That’s it. This increases indel frequencies several fold in a wide variety of cell lines and makes it easy to find homozygous knockouts even when using guides that normally perform poorly.
 
The key to NOE is having extra DNA ends. Single stranded DNA works best, and even the homologous ssDNAs one might use for HDR are effective. We tend to use ssDNAs that are not homologous to the human genome (e.g. a bit of sequence from BFP) because they make editing outcomes much simpler (NHEJ only instead of NHEJ + HDR). But double stranded DNAs also work, and even sheared salmon sperm DNA does the trick! Plasmids are no good, since they have no free ends.
 
We know that NOE is doing something to DNA repair because, while it works in many cell lines, the molecular outcomes differ between them! In most cell lines (5/7 that we’ve tested), NOE causes the appearance of very large deletions (much larger than you would normally see when using Cas9). But in the other 2/7 cell lines, NOE instead caused the cells to start scavenging little pieces of double stranded DNA and dropping them into the Cas9 break! The junctions of these insertions look like microhomologies, but we haven’t yet done the genetic experiments to say that this is caused by a process such as microhomology-mediated end joining.
 

What’s going on here?

How can making alterations in DNA repair so drastically impact the apparent efficacy of a given guide? We think that our data, together with data from other labs, implies that Cas9 cuts are frequently perfectly repaired. But this introduces a futile cycle, in which Cas9 re-binds and re-cuts that same site. The only way we observe editing is when this cycle is exited through imperfect repair, resulting in an indel. Perfect repair makes a lot of sense for normal DNA processing, since we accumulate DNA damage all the time in our normal lives. We’d be in a sorry state indeed if this damage frequently resulted in indels. It seems that NOE either inhibits perfect repair (e.g. titrating out Ku?) or enhances imperfect repair (e.g. stimulating an ATM response?), though we are still lacking direct data on mechanism at the moment.
[Diagram: the Cas9 cutting cycle – perfect repair returns the site for re-cutting, while imperfect repair produces an indel and exits the cycle]
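To make the futile-cycle argument concrete, here is a toy calculation of my own (an illustration, not an analysis from the paper): if each cut is perfectly repaired with probability p and leaves an indel with probability 1 - p, the fraction of alleles carrying an indel after n cut-repair cycles is 1 - p^n. Anything that lowers p, which is what NOE appears to do, makes the same guide look far more efficient within a fixed editing window.

    # Toy model of the Cas9 cut/repair cycle: perfect repair returns the site to a
    # cuttable state, imperfect repair leaves an indel and exits the cycle.
    # The repair probabilities and cycle counts below are made-up illustrations.
    def indel_fraction(p_perfect, n_cycles):
        return 1 - p_perfect ** n_cycles

    for p_perfect in (0.95, 0.80):  # hypothetical per-cut perfect-repair rates
        fractions = [round(indel_fraction(p_perfect, n), 2) for n in (1, 5, 20)]
        print(f"p_perfect = {p_perfect}: indel fraction after 1/5/20 cycles = {fractions}")
    # p_perfect = 0.95: [0.05, 0.23, 0.64]
    # p_perfect = 0.8:  [0.2, 0.67, 0.99]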
 

What is it good for?

The ability to stimulate incorporation of double stranded DNA into a break might be useful, since non-homologous or microhomology-mediated integration of double stranded cassettes has recently been used for gene tagging. But we haven’t explicitly tried this. We have also found NOE to be very useful for arrayed screening, in which efficiency of the edit is key to phenotypic penetrance and subsequent hit calling.
 
Importantly, NOE seems to work in primary cells, including hematopoietic stem cells and T cells. We’ve been using it when doing pooled edits in unculturable primary human cells, and find that far higher fractions of cells have gene disruptions when using NOE. We’ve so far only worked in human cells with RNP, and I’m very interested to hear people’s experience using NOE in other organisms. We haven’t had much luck trying it with plasmid-based expression of Cas9, but other groups have told me that they can get it to work in that context as well.
 

How do I try it?

So if you’re interested, give it a shot. The details are all in our recent Nature Communications paper, but feel free to reach out if you have any more questions. This work was done by Chris Richardson (the postdoc who brought you flap-annealing HDR donors), Jordan Ray (an outstanding undergrad who is now a grad student at MIT), and Nick Bray (a postdoc bioinformatics guru).


Safety for CRISPR

Jacob Corn


This post is all about establishing safety for CRISPR gene editing cures for human disease. Note that I did not say this post is about gene editing off-targets. We’ll get there, but you might be surprised by what I have to say.

Contrary to what some might say or write, most of us gene editors do not have our heads in the sand when it comes to safety. From a pre-clinical, discovery research point of view, the safety of a given gene editing technology is relatively meaningless. There are many dirty small molecules out there that you’d never want to put in a person but are ridiculously useful to help unravel new biology. Given that pre-clinical researchers have the luxury of doing things like complementation/re-expression experiments and isolating multiple independent clones, let’s dissociate everyone’s personal research experience with CRISPR (all pre-clinical at this point) from questions of safety. Those experiences are useful and informative, but so far anecdotal and not necessarily tightly linked to the clinic.

Despite what we gene editors like to think, CRISPR safety is not a completely brand new world full of unexplored territory. While there are some important unanswered questions, there’s a lot of precedent. Not only are other gene editing technologies already in the clinic, but non-specific DNA damaging agents are actually effective chemotherapies (e.g. cisplatin, temozolomide, etoposide). In the latter case, the messiness of the DNA damage is the whole point of the therapy.

Here are a few thoughts about the safety of a theoretical CRISPR gene editing therapy, in mostly random order. I’ll preface this by saying that, while I have experience in the drug industry, I’m by no means an expert on clinical safety and defer to the real wizards.

Safety is about risk vs reward

Let’s start with a big point: as with any disease, the safety of a gene editing therapy is all about the indication. The safety profile of a treatment for glioblastoma (very few good treatment options for a fatal disease with fast progression) will be very different from that of a treatment for eczema. And the safety tolerance for a glioblastoma treatment that increases progression-free survival by only two days is going to look different from the tolerance for one that increases overall survival by five years. So there won’t be One True Rule for gene editing safety, since most of the equation will be written by the disease rather than the therapy.

Gene editing safety is about functional genotoxicity

While most of the safety equation is about the disease, the treatment itself of course needs to be taken into account. At heart, gene editing reagents are DNA damaging agents, and so genotoxicity is a big concern. Does the intervention itself disrupt a tumor suppressor and lead to cancer? Does it break a key metabolic enzyme and lead to cell death? As mentioned above, there are plenty of DNA damaging agents that wreak havoc in the genome but are tolerated due to risk/reward and the lack of a better alternative (I’m especially looking at you, temozolomide). The key to this point is the function of what gets disrupted.

With CRISPR, we have the marked advantage that guide RNAs tend to hit certain places within the genome. We know how to design the on-target and are still figuring out how to predict and measure the off-target. But even with perfect methods for off-targets, we’d still need to do the functional test. Consider a “traditional” therapeutic (small molecule or biologic) – while an in vitro off-target panel based on biochemistry is valuable, it’s no substitute at all for normal-vs-tumor kill curves (as an example). And even those kill curves are no substitute for animal models. The long term, functional safety profile of a gene editing reagent is the key question, and with CRISPR I’d argue that we’re still too early in the game to know what to expect. The good news is that ZFNs so far seem pretty good, giving me a lot of hope for gene editing as a class.

Gene editing safety is NOT about lists of sequences

You’d think that determining an exhaustive list of off-target sequences would be a critical part of any CRISPR safety profile. But in the example above, contrasting in vitro biochemical assays with organismal models, I consider lists of off-targets to be equivalent to the biochemical assay. I’m going to be deliberately controversial for a moment and posit that, for a therapeutic candidate, you shouldn’t put much weight on its list of off-target sites. As stated above, you should instead care about what those off-targets are doing, and for that you might not even need to know where the off-targets are located.

When choosing candidate therapeutics in a pre-clinical mode, lists of off-target sequences can be very useful for prioritizing reagents. If one guide RNA hits two off-target sites and another hits two hundred, you’d probably choose the former rather than the latter. But what if one of the two off-targets is p53? What if the two hundred are all intergenic? Given the fitness advantage of oncogenic mutations, the math involved in using sequencing (even capture-based technologies) to detect very rare off-target sites is daunting. Being able to detect a 1 in a million sequence-based event sounds incredible, but what if you need to edit as many as 20 million cells for a bone marrow transplant? That’s twenty cells you might be turning cancerous without ever knowing it. Now we come right back around to function – you should care much more about the functional effect of your gene edit than about a list of sequences. That list of sequences is nice for orders of magnitude and useful for choosing candidate reagents, but it’s no substitute for function.
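The arithmetic behind that "twenty cells" figure is trivial but worth writing down; the numbers below are just the illustrative ones from the paragraph above.

    # Even a best-case 1-in-a-million detection limit leaves room for unseen events
    # when the edited cell product is large.
    cells_in_product = 20_000_000       # illustrative bone marrow transplant dose
    detection_denominator = 1_000_000   # assay detects events as rare as 1 in this many

    cells_below_detection = cells_in_product / detection_denominator
    print(cells_below_detection)        # 20.0 edited cells that could go unnoticed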

Is gene editing safety about immunogenicity?

There are two big questions around gene editing immunogenicity: the immunogenicity of the reagent itself, and on-target immunogenicity if the edit introduces a sequence that’s novel to the patient. What happens when the reagent itself induces a long-term immune response? For therapies that require repeat dosing, this can kill a program (hence a huge amount of work put into humanizing antibodies). A therapy that causes someone to get very sick on the second dose is not much good, nor is it useful if antibodies raised to the therapy end up blocking the treatment. But what about in situ gene editing?

Most in situ gene editing reagents are synthetic or bacterial, so one might raise antibodies against them, but the therapy itself is (ideally) one-shot-to-cure. In that case, as long as there’s not a strong naive immune response, maybe it doesn’t matter if you develop antibodies to the editing reagent? There are few answers here for CRISPR, and most work with ZFNs has been done with ex vivo edits, where the immune system isn’t exposed to the editing reagent. Time will tell if this is a problem, and animal models will be key. Even more subtle: what happens when a gene edit causes expression of a “normal” protein that a patient has never before expressed (e.g. editing the sickle codon to turn mutant hemoglobin into wild type hemoglobin)?

The potential for a new immune response against a new “self” protein is probably related to the extent of the change – a single amino acid change (e.g. sickle cell) is probably less likely to cause problems than introducing a transgene (e.g. Sangamo’s work inserting enzymes into the albumin locus for lysosomal storage disorders and hemophilia). But once again, I’ve heard a lot of questions and worry about this problem but very few answers. In vivo experiments are desperately needed, and the closer to a human immune system the better.

Moving forward based on the predictive data

As you’ve probably gathered by now, I have a healthy respect for functional characterization when it comes to safety. That’s why it’s absolutely critical that we keep moving forward and not let theoretical worries about arbitrary numbers of off-targets stifle innovation without data. These are tools that could some day help patients in desperate need and with few other options, so let the truly predictive functional data rule the day.


CAR-Ts and first-in-human CRISPR

Jacob Corn


(This post has been sitting in my outbox for a bit thanks to some exciting developments in the lab, so excuse any “dated” references that are off by a week or two)

My news feeds have been alight with the news of Carl June’s (U Penn) recent success at the RAC (Recombinant DNA Advisory Committee) in proposing to use CRISPR to make better CAR-Ts. Specifically, they’re planning to delete TCR and PD-1 in NY-ESO-1 CAR-Ts, melding checkpoint immunotherapy with targeted cell killing. Back in April 2015 I predicted a five year horizon for approval of ex vivo therapies based on a gene edited cell product, and this green light, leading to a Phase I trial, could be consistent with that timeframe. Crazy to think…    
CAR-Ts are a clear place where gene editing can make an important difference in the short term, and companies are already hard at work in this arena – Novartis has a collaboration with Intellia and Juno is working with Editas. In this context, the work done by June’s team is incredibly important and paves the way for therapeutic cell products made with gene editing.
But most news headlines declare this “the first use of CRISPR in humans”. While technically true, in that gene editing is being used to make a cell product, this description doesn’t really sound right to me. The headline is that we’re about to start doing gene editing in humans, with the implication that genetic diseases are in the crosshairs.
But CAR-Ts of course target cancer, and are not permanent additions to a patient’s body. You can stop taking CAR-Ts in a way that is difficult or impossible with other genetically modified cell products (e.g. edited iPSCs for regenerative medicine) or gene editing therapeutics (e.g. in situ reversal of a disease allele). And there are other ways to tackle some of these cancers (though none nearly so effective). We’ll learn a lot from these next-gen CAR-Ts and I have no doubt that they’ll impact patients, but will the main learnings be specific to CAR-Ts and oncology? Is this an important step in establishing safety or efficacy for CRISPR as a way to cure genetic diseases?
We know that primary T cells are actually surprisingly easy to edit, that editing doesn’t cause T cells to keel over, and that Sangamo paved the way with ZFN-modified T cells for HIV some time ago. But each Cas-based reagent potentially has unique advantages and liabilities (especially genotoxic ones), and nothing has yet been used at scale. Perhaps we’ll learn something generalizable about using these proteins to make therapeutic products, and maybe set the stage for functional safety assays? Maybe a main benefit is to get the public comfortable with the idea of gene editing to combat disease?
Cancer immunotherapy is certainly a powerful meme at the moment, and tapping into that good feeling could raise awareness and improve public perception of gene editing. I’m looking forward to results from June’s work with great excitement, whether it’s aimed at genetic disease or not.


CRISPR Challenges – Imaging

Jacob Corn


This post is the first in a new, ongoing series: what are big challenges for CRISPR-based technologies, what progress have we made so far, and what might we look forward to in the near future? I’ll keep posting in this series on an irregular basis, so stay tuned for your favorite topic. These posts aren’t meant to belittle any of the amazing advances made so far in these various sub-fields, but to look ahead to all the good things on the horizon. I’m certain these issues are front and center in the minds of people working in these fields, and this series of posts is aimed to bring casual readers up to speed with what’s going to be hot.

First up is CRISPR imaging, in which Cas proteins are used to visualize some cellular component in either fixed or live cells. This is a hugely exciting area. 3C/4C/Hi-C/XYZ-C technologies give great insight into the proximity of two loci averaged over large numbers of cells at a given time point. But what happens in each individual cell? Or in real time? We already know that location matters, but we’re just scratching the surface on what, when, how, or why.

CRISPR imaging got started when Stanley Qi and Bo Huang fused GFP to catalytically inactive Cas9 (dCas9) to look at telomeres in living cells. Since then, we’ve seen similar approaches (fluorescent proteins or dyes brought to a region through Cas9) and a lot of creativity used to multiplex up to three colors. There’s a lot more out there, but I want to focus on the future…

What’s the major challenge for live cell CRISPR imaging in the near future?

Sensitivity

Most CRISPR imaging techniques have trouble with signal to noise. It is not yet possible to see a fluorescent Cas9 binding a single-copy locus when there are so many Cas9 molecules floating around the nucleus. So far, imaging has side-stepped signal to noise by either targeting repeat sequences (putting multiple fluorescent Cas9s in one spot) or recruiting multiple fluorophores to one Cas9. Even then, most CRISPR imaging systems rely on leaky expression from uninduced inducible promoters to keep Cas9 copy number on par with even repetitive loci. Single-molecule imaging of Halo-Cas9 has been done in live cells, but again only at repeats. Even fixed-cell imaging has trouble with non-repetitive loci. Sensitivity is also a problem for RCas9 imaging – this innovation allowed researchers to direct Cas9 to specific RNAs and follow transcripts in living cells, but it was mostly explored with highly expressed (e.g. GAPDH) or highly concentrated (e.g. stress granule) RNAs. How can we track a single-copy locus, or ideally multiple loci simultaneously, to see how nuclear organization changes over time?

Someone’s going to crack the sensitivity problem, allowing people to watch genomic loci in living cells in real time. Will we learn how intergenic variants alter nuclear organization to induce disease? Will we see noncoding RNAs interacting with target mRNAs during development? With applications this big, I know many people are working on the problem and I’m sure there will be some big developments soon.


Ideas for better pre-prints

Benjamin Gowen


A few weeks ago, Jacob wrote a blog post about his recent experience with posting pre-prints to bioRxiv. His verdict? “…preprints are still an experiment rather than a resounding success.” That sounds about right to me. I’m bearish on pre-prints right now because the very word implies that the “real” product will be the one that eventually appears “in print”. Don’t get me wrong–I think posting pre-prints is a great step toward more openness in biology, and I applaud the people who post their research to pre-print servers. Pre-prints are also a nice work-around to the increasingly long time between a manuscript’s submission and its final acceptance in traditional journals; posting a pre-print allows important results to be shared more quickly. There’s a lot of room for improvement, though. With some changes, I think pre-print servers could better encourage a real conversation between a manuscript’s authors and readers. Here are some of my thoughts on how they might achieve that. I know there are several flavors of pre-print servers out there, but for this post I’m going to use bioRxiv for my examples.

 

Improve readability

It’s 2016, we’ve got undergraduates doing gene editing, but most scientific publications are still optimized for reading on an 8.5×11” piece of paper. Pre-prints tend to be even less readable–figures at the end of the document, with legends on a separate page. The format discourages casual browsing of pre-prints, and it ensures the pre-print will be ignored as soon as a nicely typeset version is available elsewhere. I will buy a nice dinner for anyone who can make pre-prints display like a published article viewed with eLife Lens.

 

Better editability

bioRxiv allows revised articles to be posted prior to publication in a journal, but I would like a format that makes it really easy for authors to improve their articles. Wikipedia is a great model for how this could work. On Wikipedia, the talk page allows readers and authors to discuss ways to improve an article. The history of edits to a page shows how an article evolves over time and can give authors credit for addressing issues raised by their peers. Maintaining good version history prevents authors from posting shoddy work, fixing it later, and claiming priority based on when the original, incomplete version of the article was posted.

 

Crowd-source peer review

Anyone filling in a reCAPTCHA to prove they’re not a robot could be helping improve Google Maps or digitize a book. What if pre-print servers asked users questions aimed at improving an article? Is this figure well-labeled? Does this experiment have all of the necessary controls? What statistical test is appropriate for this experiment? With data from many readers about very specific pieces of an article, authors could see a list of what their audience wants. It looks like we need to repeat the experiments in Figure 2 with additional controls. Everybody likes the experiments in Figure 3, but they hate the way the data are presented.
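As a thought experiment, here is a minimal sketch of how that kind of structured reader feedback could be tallied for the authors; the questions, field names, and responses are all hypothetical.

    # Hypothetical structured feedback from pre-print readers, tallied per figure.
    from collections import Counter, defaultdict

    responses = [
        {"figure": "Fig 2", "question": "Are all necessary controls present?", "answer": "no"},
        {"figure": "Fig 2", "question": "Are all necessary controls present?", "answer": "no"},
        {"figure": "Fig 3", "question": "Is the data presentation clear?", "answer": "no"},
        {"figure": "Fig 3", "question": "Is the data presentation clear?", "answer": "yes"},
    ]

    tallies = defaultdict(Counter)
    for response in responses:
        tallies[(response["figure"], response["question"])][response["answer"]] += 1

    for (figure, question), counts in sorted(tallies.items()):
        print(f"{figure} – {question} {dict(counts)}")
    # The authors' to-do list falls out of wherever "no" dominates.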

 

Become the version of record

Okay, this one’s definitely a stretch goal. Right now pre-prints get superseded by the “print” version of the article, but that doesn’t need to be the case. Let’s imagine a rosy future in which articles on bioRxiv are kept completely up-to-date. Articles are typeset through Lens, making them more readable than a journal’s PDF. There’s a thriving “talk” page where readers can post comments or criticisms. Maybe the authors do a new experiment to address readers’ comments, and it’s far easier to update the bioRxiv article than to change the journal version. At that point, bioRxiv would become the best place to browse the latest research or make a deep dive into the literature. Traditional journals could still post their own versions of articles, provided they properly cite the original work, of course.


Archive

March 12, 2020 – Welcome to Lena
October 16, 2018 – Bootstrapping a lab
June 12, 2017 – Shapers and Mechanists
June 1, 2017 – Backpacking season
November 9, 2016 – Sequence replacement to cure sickle cell disease
September 12, 2016 – Improved knockout with Cas9
August 29, 2016 – Safety for CRISPR
July 5, 2016 – CAR-Ts and first-in-human CRISPR
May 25, 2016 – CRISPR Challenges – Imaging
May 17, 2016 – Ideas for better pre-prints

Contact Us

Questions and/or comments about Corn Lab and its activities may be addressed to:

JACOB.CORN@BIOL.ETHZ.CH
