Cell squeezing enhances protein imaging


Tagging proteins with a fluorescent label such as green fluorescent protein (GFP) is currently the best way to track specific molecules inside a living cell. However, while this approach has yielded many significant discoveries, GFP and similar tags are so large that they may interfere with the labeled proteins’ natural functions.

A new approach based on cell-squeezing technology developed at MIT allows researchers to deliver fluorescent tags that are much less bulky, making this kind of protein imaging easier and more efficient.

In 2013, the MIT team demonstrated that squeezing cells makes it possible to deliver a variety of molecules, including proteins, DNA, carbon nanotubes, and quantum dots, into the cells without damaging them.

Researchers at Goethe University Frankfurt in Germany, working with their MIT colleagues, have now employed this approach to deliver relatively tiny fluorescent tags that can be targeted to specific proteins. Using regular confocal or super-resolution microscopes, scientists can then track these proteins over time as they perform their normal functions.

“It really opens up the door to watching protein interactions in live cells,” says Armon Sharei, a former postdoc at MIT’s Koch Institute for Integrative Cancer Research. “Proteins are the building blocks of cells and control all their functions, so it’s exciting to be able to finally visualize them in a living cell, without genetic modifications.”

Sharei is an author of a paper describing the technique in Nature Communications. The paper’s lead author is Alina Kollmannsperger, a graduate student at Goethe University, and the senior authors are Ralph Wieneke and Robert Tampé of Goethe University. Robert Langer, the David H. Koch Institute Professor at MIT, and Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT, are also authors.

“We are very excited about this latest application for our cell squeezing approach and its implications for protein labeling,” says Langer, who is a member of MIT’s Koch Institute for Integrative Cancer Research.

Rapid delivery

In their 2013 study, the MIT team showed that squeezing cells through a constriction 30 to 80 percent smaller than the cells’ diameter caused tiny, temporary holes to appear in the cell membranes, allowing any large molecules in the surrounding fluid to enter. The holes reseal quickly and the cells suffer no long-term damage.

The researchers then began working with the Goethe University team to use this technique to label proteins with small fluorescent tags, which have previously been difficult to get into living cells. The Goethe team developed a tag called trisNTA that binds to any protein with a long string of histidine molecules (one of the 20 amino acids that form the building blocks of proteins).

For this study, the researchers first used genetic engineering to attach the histidine sequence to several different proteins, including one found in the nucleus and another involved in processing foreign molecules that have entered the cell. Then, the cells were pushed through a microfluidic channel at a rate of 1 million cells per second, which squeezed them sufficiently to allow the trisNTA tag in.

Until now, scientists have had to use protein tags, such as the bulky GFP, that can be genetically encoded in the cells’ DNA, or else study proteins in nonliving cells, because getting other fluorescent tags into cells has required destroying the cell membrane.

“This study shows how microfluidic cell-squeezing together with specific chemical labeling can be exploited to hook various synthetic fluorophores to intracellular proteins with exquisite specificity. I foresee many applications for this approach and I have a very long list of probes that I would like to test immediately,” says Kai Johnsson, a professor of chemical sciences and engineering at the École Polytechnique Fédérale de Lausanne in Switzerland, who was not involved in the research.

With further work, including the development of new tags that target other proteins, this technique could help scientists learn much more about proteins’ functions inside living cells.

“Basically everything that happens in your cells is mediated by proteins,” Sharei says. “You can start to learn a lot about the basic biology of how a cell works, how it divides, and what makes the cancer cell a cancer cell, as far as what mechanisms go awry and what proteins are responsible for that.”

Normal cell behavior

The researchers believe that the cell squeezing technique should work with nearly any type of cell. So far, they have tried it successfully with more than 30 different types of mammalian cells.

An added benefit is that when cells undergo the squeezing procedure, they show no changes in the genes they express. In contrast, when a jolt of electricity is applied to cells to make them more permeable — a technique commonly used to deliver DNA and RNA — more than 7,000 genes are affected.

“It’s possible to assume that a squeezed cell is probably going to behave more or less normally, which is critical when you’re trying to study these kinds of processes,” Sharei says.

A company called SQZ Biotech, started by MIT researchers including Sharei, Langer, and Jensen, has licensed the cell squeezing technology and is now using it to engineer immune cells to improve their ability to attack cancer cells.

Source: http://news.mit.edu/2016/cell-squeezing-enhances-protein-imaging-0201

New class of molecular ‘lightbulbs’ illuminate MRI


Duke University researchers have taken a major step towards realizing a new form of MRI that could record biochemical reactions in the body as they happen.

In the March 25 issue of Science Advances, they report the discovery of a new class of molecular tags that enhance MRI signals by 10,000-fold and generate detectable signals that last over an hour. The tags are biocompatible and inexpensive to produce, paving the way for widespread use of magnetic resonance imaging (MRI) to monitor metabolic processes of conditions like cancer and heart disease in real time.

“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Warren S. Warren, James B. Duke Professor and Chair of Physics at Duke, and senior author on the study. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”

MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. Applying a strong magnetic field, followed by a series of radio waves, induces these hydrogen magnets to broadcast their locations. Since most of the hydrogen atoms in the body are bound up in water, the technique is used in clinical settings to create detailed images of soft tissues like organs, blood vessels and tumors inside the body.

But the technique also has the potential to show body chemistry in action, said Thomas Theis, assistant research professor of chemistry at Duke and co-lead author on the paper. “With magnetic resonance in general, you have this unique sensitivity to chemical transformations. You can see them and track them in real time,” Theis said.

MRI’s ability to track chemical transformations in the body has been limited by the low sensitivity of the technique, which makes small numbers of molecules impossible to detect without using unattainably massive magnetic fields.

For the past decade, researchers have been developing methods to “hyperpolarize” biologically important molecules, converting them into what Warren calls magnetic resonance “lightbulbs.”

With this boosted signal, these “lightbulbs” can be detected even in low numbers. “Hyperpolarization gives them 10,000 times more signal than they would normally have if they had just been magnetized in an ordinary magnetic field,” Warren said.
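As a rough back-of-the-envelope illustration (not a calculation from the paper), the sketch below estimates ordinary thermal polarization for hydrogen nuclei and then applies the quoted 10,000-fold enhancement; the field strength and temperature are assumed values chosen only to show the scale.

```python
# A back-of-the-envelope sketch (not from the paper) of what a 10,000-fold
# signal boost means. Thermal nuclear polarization in a field B is roughly
# P = gamma * hbar * B / (2 * k * T). Field strength and temperature below
# are assumptions for illustration only.
hbar = 1.0546e-34      # reduced Planck constant, J*s
k_B = 1.3807e-23       # Boltzmann constant, J/K
gamma_1H = 2.675e8     # proton gyromagnetic ratio, rad/s/T

B = 1.0                # assumed field, tesla
T = 298.0              # assumed temperature, kelvin (room temperature)

P_thermal = gamma_1H * hbar * B / (2 * k_B * T)
P_hyper = 1e4 * P_thermal   # the enhancement factor quoted above

print(f"thermal polarization:      {P_thermal:.1e}")  # ~3e-6, a few spins per million
print(f"after 10,000x enhancement: {P_hyper:.1e}")    # ~3e-2, a few percent
```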

While promising, Warren says these hyperpolarization techniques face two fundamental problems: incredibly expensive equipment — around 3 million dollars for one machine — and most of these molecular lightbulbs burn out in a matter of seconds.

“It’s hard to take an image with an agent that is only visible for seconds, and there are a lot of biological processes you could never hope to see,” said Warren. “We wanted to try to figure out what molecules could give extremely long-lived signals so that you could look at slower processes.”

Jerry Ortiz Jr., a graduate student at Duke and co-lead author on the paper, synthesized a series of molecules containing diazirines, a chemical structure composed of two nitrogen atoms bound together in a ring. Diazirines were a promising target for screening because their geometry traps hyperpolarization in a “hidden state” where it cannot relax quickly.

Using a simple and inexpensive approach to hyperpolarization called SABRE-SHEATH, in which the molecular tags are mixed with a spin-polarized form of hydrogen and a catalyst, the researchers were able to rapidly hyperpolarize one of the diazirine-containing molecules, greatly enhancing its magnetic resonance signals for over an hour.

Qiu Wang, assistant professor of chemistry at Duke and co-author on the paper, said this structure is a particularly exciting target for hyperpolarization because it has already been demonstrated as a tag for other types of biomedical imaging.

“It can be tagged on small molecules, macromolecules, amino acids, without changing the intrinsic properties of the original compound,” said Wang. “We are really interested to see if it would be possible to use it as a general imaging tag.”

The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods.

“You could envision, in five or ten years, you’ve got the container with the catalyst, you’ve got the bulb with the hydrogen gas. In a minute, you’ve made the hyperpolarized agent, and on the fly you could actually take an image,” Warren said. “That is something that is simply inconceivable by any other method.”

Source: https://www.sciencedaily.com/releases/2016/03/160325151726.htm

How KPCB thinks about the future of investing in wearable technology


For years now, wearable devices have promised to help us lead healthier lives, experience life in new ways, and become less dependent on our smartphones.

2015 was a very important year for wearables as the market took several important steps towards delivering on these promises.

Apple released its much-anticipated Apple Watch. GoPro launched a host of new action camera products. Fitbit went public with a market cap of over $6B. And Oculus and Microsoft solidified their plans for Rift and HoloLens, respectively.

Despite the positive momentum, the expectations of the market have not yet been met, as many of these wearable devices will be found tucked away in a drawer or a nightstand after only a few weeks of use.

Fortunately, technology can and will solve the majority of shortcomings associated with wearables today. To use a baseball analogy, we are only in the third inning, and there is still a lot of ballgame to be played. Several of us got together in the podcast studio at KPCB to discuss our thoughts on the future of wearable technology – below is our discussion, along with some of our takeaways.

Battery Life is King

Battery life is by far the biggest obstacle preventing broad market adoption and retention. Our wearable devices should last weeks and months, not hours and days. Power consumption of key components like processors, radios, memories, and sensors is the primary culprit.
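A quick, hypothetical power budget makes the gap concrete. The battery capacity and per-component draws below are assumed round numbers, not measurements of any particular device.

```python
# Hypothetical smartwatch power budget: assumed figures only.
BATTERY_WH = 1.0                       # ~270 mAh at 3.8 V

avg_power_mw = {                       # assumed average draw per component
    "application processor": 15,
    "display":               25,
    "radio (BLE/Wi-Fi)":     10,
    "sensors (HR, IMU)":      5,
    "memory and misc":        5,
}

total_mw = sum(avg_power_mw.values())      # 60 mW
hours = BATTERY_WH * 1000 / total_mw       # ~16.7 hours on a charge
print(f"total draw: {total_mw} mW -> {hours:.1f} h per charge")

# Cutting every component's draw tenfold stretches the same cell to about a week.
print(f"at 10x lower power: {hours * 10 / 24:.1f} days per charge")
```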

Almost all of these components are hacked together from legacy mobile phone parts that were not designed for wearables, and therefore struggle to satisfy the product needs around power consumption (battery life), form factor (shape and size), and weight of the wearable.

Unfortunately, this ultimately leads to underwhelming functionality and feature sets in wearables. Let’s take a moment to reflect on the Apple Watch. It is a beautiful, best-in-class product, but our belief is that power consumption and form factor played a significant role in determining its feature set.

It appears that the Apple Watch did not have the power budget or the space available for a cellular radio. This prevented the watch from operating independently as a standalone mobile device, leaving it instead as an expensive peripheral to the iPhone. A unique set of components, designed from the ground up for wearables, would not only have extended the battery life and reduced the form factor, but would also have provided the optionality for additional critical features.

Virtual and augmented reality headsets also suffer from power consumption issues. These devices require high resolution displays and optics, fast CPUs, fast GPUs and numerous always-on sensors in order to deliver a meaningful user experience.

Unfortunately, today’s versions of these AR/VR headsets cannot fit all of these components into small, untethered form factors, forcing most companies to build a tethered companion unit for processing and power delivery. Weight is an equally important criterion: every ounce added to the headset creates additional discomfort for the neck. Not only do these components need to consume less power, they need to be significantly smaller and lighter.

The Next Phase of Wearables

At KPCB, we’re still huge believers in wearables. We are constantly thinking about what’s technically missing that would enable wearables to become the next multi-billion unit and transformational market.

The first step is to redesign all of the key components with a clear focus on significantly lower power – aim for N times lower, where N is an integer! Redefining the system architecture and building-block device structures for displays, processors, memories, sensors, radios and batteries is a must.

Companies developing processors, new memory technologies, new displays and next generation connectivity are well-positioned as key enablers for next generation wearables.

In order to make this next big leap, we need to begin to think about these devices not as wearables, but as pervasive devices that are integrated into every facet of our lives.

Tiny in size, consuming little to no energy during operation, constantly processing, continuously mapping our environment and gathering data, and communicating device-to-device. In order to make this world a reality, we’ll need almost invisible components that can scavenge energy from kinetic movement and from surrounding power sources such as wireless signals.

We’ll need batteries that will rarely need to be recharged. And we’ll need new ways to embed electronics into plastic, wood, metals and fabric.

The Next Six Innings…

Every new successful hardware industry goes through a predictable cycle on the path to selling a billion units.

The first step requires a new suite of breakthrough components to meet the market demands around feature set, form factor and battery life.

This happened in the early 2000s as the PC industry made the shift from desktop computers to laptop computers. It happened again in the late 2000s as the phone industry shifted from feature phones to smartphones.

After the hardware matures, the innovation moves to software and services. Think about this within the context of the smartphone market. When the hardware was fast enough, and could be used for a full day on one charge, we were introduced to new always-on services.

The app store took off. We experienced entirely new forms of communications and the rise of mobile gaming. We saw new services around cloud backup and data security emerge.

Over the next ten years, we can imagine a world where instead of wearing a smartband or smartwatch to track activity and heart rate, we could just put on our favorite shirt.

The buttons on that shirt would capture data from our bodies and source power from the ambient environment. These buttons would communicate with the world around us, and would never need to be recharged. Software and services would tell us when to hydrate, when to get out under the sun, when to take it easy, and when best to sleep. That’s the pervasive computing world of the future.

*Ambiq Micro and Crossbar are current KPCB portfolio companies. 

Source: http://techcrunch.com/2016/03/29/you-are-what-you-wearable/

Portable ultrasounds gain popularity among specialists


Hospitals are expressing more interest in high-end portable ultrasounds as the devices become more widely used by a growing number of specialties.

As manufacturers have added more advanced features to portable scanners, they’ve become more popular in the OR, trauma and specialty settings. Adoption of ultrasound has been growing across fields of medicine, and portable scanners have put the technology in the hands of more providers.

A portable ultrasound is defined by ECRI as a unit that is not permanently mounted to a cart. That can include everything from a small pocket-sized scanner to a full-featured large machine that can be carried and moved. A portable ultrasound isn’t necessarily inferior to a traditional unit, but cart-based units may have more advanced features, such as 3-D and 4-D ultrasound, or the ability to fuse images from other modalities, such as MRIs.

Between November and January, a portable ultrasound unit cost an average of $40,541, according to the Modern Healthcare/ECRI Institute Technology Price Index (TPI). The TPI provides monthly and annual data on pricing for 30 supply and capital items that hospitals and other provider organizations purchase, based on three-month rolling averages.

Due to changes in ECRI’s methodology since last year, the firm cannot make year-over-year price comparisons. ECRI’s rolling averages now reflect the total configured cost of the item, including all options and accessories, rather than the base price that was previously presented.
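As a small illustration of that methodology, the sketch below computes a three-month rolling average from made-up monthly prices; the figures are placeholders, not ECRI data.

```python
# Toy three-month rolling average in the spirit of the TPI; prices are made up.
import pandas as pd

prices = pd.Series(
    [40000, 41000, 42000, 39000],
    index=pd.period_range("2015-11", periods=4, freq="M"),
    name="avg_configured_price_usd",
)

print(prices.rolling(window=3).mean())
# The first complete window (Nov-Jan) averages these placeholder prices to 41000.
```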

New models now support transesophageal transducers, which can be passed through the esophagus to capture images of the heart from inside the body. These specialized devices—which would replace the traditional ultrasound “wand” that would normally be waved over the chest or abdomen to view the heart—can drive up the price of a portable ultrasound from $30,000 to $50,000, depending on the vendor, according to the ECRI Institute.

Systems range from $20,000 to $150,000, with the most expensive models often being used for cardiac care and transesophageal echocardiograms. Fujifilm, General Electric Co., Mindray Medical International, Royal Philips and Siemens are the top five ultrasound manufacturers.

Anesthesiologists, emergency physicians, pain medicine physicians and rheumatologists increasingly have been drawn to the use of portable scanners, said Daniel Merton, senior project officer in ECRI’s Health Device Group. Using the device at the bedside allows specialists to gain a more targeted, immediate diagnostic exam than the more comprehensive, broad procedure that would be performed by radiology professionals.

The devices are also being written into hospital policy. It’s now standard at most facilities to use a portable ultrasound when placing a central line, Merton said.

“There are so many users beginning to (better) understand the use of ultrasound,” Merton said. “They’re easier to use, less expensive to purchase for the point-of-care and they’re allowing providers to treat patients better.”

Source: http://www.modernhealthcare.com/article/20160315/NEWS/160319938/portable-ultrasounds-gain-popularity-among-specialists

Machine learning technique boosts lip-reading accuracy


For human lip readers, context is key in deciphering words stripped of the full nuance of their audio cues. But a technology model for lip-reading developed at the University of East Anglia in the UK has been shown to be able to interpret mouthed words with a greater degree of accuracy than human lip readers, thanks to the application of machine learning tech to classify the visual aspect of sounds. And the kicker is the algorithm doesn’t need to know the context of what you’re discussing to be able to identify the words you’re using.

While the model remains a piece of research at this stage, there are scores of potential applications for technology that could automagically transform visual cues into accurate speech — whether it’s helping people who have audio impairments, or enhancing audio-less security video footage with additional speech data — or even to try to figure out exactly what charged word one footballer spat at another in the heat of a match…

Such a tech could also be applied as a fallback for poor audio quality on a mobile or video call. Or for automating subtitles. Or even perhaps to power a front-facing camera-based mobile ‘voice’ assistant which you wouldn’t actually have to speak to but could just discreetly mouth commands at (how cool would that be?). Safe to say, the list of applications-in-waiting for machine powered lip-reading is as long as the dictionary is deep. So there’s bags of future potential if only researchers can deliver the goods.

The UEA team behind this new machine learning training model for lip reading have been looking purely at visual inputs — training their model on the shape of the mouth as certain sounds are spoken, without any audio input cues at all.

“We’re looking at… visual cues and saying how do they vary? We know they vary for different people. How are they using them? What’s the differences? And can we actually use that knowledge in this particular training method for our model? And we can,” says Dr Helen Bear who created the visual speech recognition tech model as part of her PhD, along with Prof Richard Harvey of UEA’s School of Computing Sciences.

“The idea behind a machine that can lip read is that the machine itself has got no emotions, it doesn’t mind if it gets it right or wrong — it’s just trying to learn. So in the paper… I’ve been showing how we can use those visual confusions to make better phoneme classifiers. So it’s a new training method,” she adds.

Dr Bear notes that a lot of current research in the lip reading field is looking both at audio and visual cues to try to improve the accuracy of machine lip reading. So the UEA model stands out on merit of focusing solely on visual speech to try to boost machine-powered lip reading.

“We were effectively pretending that that audio signal is not there at all,” she says. “The idea being you can either have a lip-reading only system or it could be used in an audio-visual system that maybe one day hopefully it would be nice if it could jump in, do the visual signals only until the audio comes back in, for example, if you’re on a Skype call and the audio goes out but you can still see somebody.”

The core challenge for lip reading techniques in general is there are — at least to the human eye — fewer visual cues than there are acoustic audio sounds humans make. Examples of sounds with confusingly similar shapes when seen on the lips are ‘/p/,’ ‘/b/,’ and ‘/m/’ — all of which typically cause difficulties for human lip readers. However UEA’s visual speech model is able to more accurately distinguish between these visually similar lip shapes.

“It turns out there are some visual distinctions between ‘/p/,’ ‘/b/,’ and ‘/m/’ but it’s not something that human lip readers have been able to achieve,” says Dr Bear. “But with a machine we are showing that those distinctions are there, they do exist and our recognizers are much better at doing it.”

“If I was to try and build a classifier to recognize just the /p/ sound what I would have done is it’s first trained on all the sounds that look the same. What we then do is we then refine that training by doing some more iterations of training which are only on the /p/ sound,” she says, discussing the training technique.
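A minimal sketch of that coarse-to-fine idea follows; it is not the UEA implementation, and the lip-shape features, labels, and classifier are placeholders. A detector for /p/ is first trained on every phoneme in the same confusable group (/p/, /b/, /m/), then refined with further passes over /p/ alone.

```python
# Coarse-to-fine training sketch for a /p/ detector (placeholder data).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Hypothetical lip-shape feature vectors, one per video frame.
n_frames, n_features = 2000, 40
X = rng.normal(size=(n_frames, n_features))
phoneme = rng.choice(["p", "b", "m", "ah", "s", "t"], size=n_frames)

viseme_group = ["p", "b", "m"]                          # visually confusable set
y_coarse = np.isin(phoneme, viseme_group).astype(int)   # stage 1: looks like /p/
y_fine = (phoneme == "p").astype(int)                   # stage 2: is /p/

clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])

# Stage 1: learn the coarse viseme boundary (all sounds that look the same).
for _ in range(5):
    clf.partial_fit(X, y_coarse, classes=classes)

# Stage 2: refine on the /p/ sound alone, starting from the stage-1 weights.
for _ in range(20):
    clf.partial_fit(X, y_fine, classes=classes)

print("decision scores, first 5 frames:", clf.decision_function(X[:5]).round(2))
```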

“We’re actually learning and understanding what all these visual units mean and why they differ between people and we’ve used that knowledge in order to change the conventional lip reading system and make it better. It is a significant step forward,” she adds.

‘Much better’ is still relative — with the accuracy level for lip reading remaining low. Accuracy at the word level for the model stands at between 10 and 20 per cent (i.e. for correctly identifying a word), according to Dr Bear — albeit she stresses that’s still much higher than guessing. Over a sentence it of course becomes easier to distinguish sense from an entire transcript, she adds.

“In all honesty we’re not 100 per cent sure [why it works],” she tells TechCrunch. “We just know that with our particular classifiers if we train them in the right way, with the right data, they’re not biased towards anything.

“The complexity is that understanding the science of why visual speech is as complex as it is is a much harder question than can we use machine learning to get better results. We know that machine learning is evolving all the time, and we’re getting different types of classifiers… But actually asking the hard questions of what it is they’re learning and how visual speech is and how much it varies and how we’re going to control all those variables, those are the harder questions.”

Asked to hazard a guess on how far out the research might be from being usefully commercialized in an application, she jokes: “If I worked for Google probably a lot sooner!”, before adding that any commercialization is likely to be “a fair few years away yet”.

“We’ve still got things we need to learn and understand,” she says, characterizing the research as just one piece of an interlocking series of linguistic models that will be needed to enable machines to adroitly and accurately pull speech data from the twists and turns of human lips.

It’s also worth noting that the UEA model was solely focused on the English language. So the scope of the challenge ahead to deliver on the promise of lip-reading powered applications is not to be underestimated.

Could the UEA model be combined with other predictive linguistic techniques — perhaps machine learning based next-word prediction technologies — in order to further enhance lip-reading capabilities? “That’s exactly what I love to be able to do,” she says. “To have something that robust would be amazing but that’s going to take quite a bit more work as yet. It’s not going to be going to market any time soon.”

Dr Bear is presenting the research findings at the International Conference on Acoustics, Speech and Signal Processing in Shanghai this Friday when her paper — Decoding visemes: Improving machine lip-reading — will also be published. The research was part of a three-year project, supported by the Engineering and Physical Sciences Research Council.

Source: http://techcrunch.com/2016/03/24/tech-to-read-my-lips/

Google launches new machine learning platform


Google today announced a new machine learning platform for developers at its NEXT Google Cloud Platform user conference in San Francisco. As Google chairman Eric Schmidt stressed during today’s keynote, Google believes machine learning is “what’s next.” With this new platform, Google will make it easier for developers to use some of the machine learning smarts Google already uses to power features like Smart Reply in Inbox.

The service is now available in limited preview.

“Major Google applications use Cloud Machine Learning, including Photos (image search), the Google app (voice search), Translate and Inbox (Smart Reply),” the company says. “Our platform is now available as a cloud service to bring unmatched scale and speed to your business applications.”

Google’s Cloud Machine Learning platform basically consists of two parts: one that allows developers to build machine learning models from their own data, and another that offers developers a pre-trained model.

To train these machine learning models (which takes quite a bit of compute power), developers can take their data from tools like Google Cloud Dataflow, Google BigQuery, Google Cloud Dataproc, Google Cloud Storage, and Google Cloud Datalab.

“Cloud Machine Learning will take care of everything from data ingestion through to prediction,” the company says. “The result: now any application can take advantage of the same deep learning techniques that power many of Google’s services.”

The pre-trained models include existing APIs like the Google Translate API and Cloud Vision API, but also new services like the Google Cloud Speech API (you can read more about this here). The Cloud Speech API powers Google’s own voice search and voice-enabled apps. It can do speech-to-text conversion for 80+ languages.
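For a flavor of how the pre-trained side is consumed, here is a hedged sketch that sends one image to the Cloud Vision API's public v1 REST endpoint for label detection; the API key and image file are placeholders, and an enabled project with billing is assumed.

```python
# Minimal Cloud Vision API label-detection request (v1 REST endpoint).
import base64
import requests

API_KEY = "YOUR_API_KEY"   # placeholder: a key with the Vision API enabled
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

with open("photo.jpg", "rb") as f:   # placeholder image file
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(ENDPOINT, json=body)
for label in resp.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```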

Google stressed during today’s keynote that it wants to bring the technology it developed internally to developers and make it as easy to use as possible. At the same time, the company is also open-sourcing tools like TensorFlow to allow the community to take its internal tools, adapt them for their own uses, and improve them.


Source: http://techcrunch.com/2016/03/23/google-launches-new-machine-learning-platform/

Gene study uncovers ‘spectrum of mutations’ in mesothelioma


In a paper published in Nature Genetics, the team – from Brigham and Women’s Hospital (BWH) in Boston, MA, and Genentech in San Francisco, CA – reports how they carried out a comprehensive genomic analysis of over 200 mesothelioma tumors.

Lead author Dr. Raphael Bueno, chief of BWH’s Division of Thoracic Surgery and co-director of the hospital’s Lung Center, says because they were able to analyze so many samples for such a rare disease, they were able to identify a “spectrum of mutations.”

He says some of the mutations they uncovered have been found in other cancers, and drugs that target them are already developed. He adds:

“No one knew before now that these mutations might also be found in mesothelioma tumors. This new work suggests that patients with such mutations may benefit from certain existing drugs.”

Genomic analysis is a growing field where scientists use cutting-edge DNA sequencing technology and information systems to identify, measure and compare the genetic information and processes that influence cell behavior.

Using such tools, scientists can map the genetic alterations that cause cells to malfunction and give rise to cancer.

Study identifies 2,500 genetic alterations

Malignant mesothelioma – often shortened to mesothelioma – is a rare but deadly cancer that arises when tumor cells form in the thin layer of tissue that covers the lung, chest wall or abdomen. The cancer can also develop in the heart or testicles, but this is very rare.

The major cause of mesothelioma is exposure to asbestos over a period of time. This includes people exposed to asbestos in the workplace and their family members. It usually takes 20 years or more for the cancer to develop.

The 5-year survival rate for mesothelioma is 5-10%. Every year in the US, over 3,200 people are diagnosed with the cancer, and around the same number die of the disease.

While aggressive surgery can help some patients, current treatments do not help those with advanced disease.

For their study, the team analyzed 216 malignant pleural mesothelioma (MPM) tissue samples and compared normal tissue to cancerous tissue. MPM is the most common type of mesothelioma – it develops in the pleura, the thin layer of tissue surrounding the lungs.

They found over 2,500 alterations in DNA and RNA (the molecules that translate DNA code into instructions for cells to follow) and identified 10 significantly mutated genes. They also captured information about immune cells at the site of the tumor.
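The paper's statistical pipeline is not detailed here, but the general idea behind calling "significantly mutated genes" can be sketched: compare each gene's mutation count across the 216 tumors against an assumed background mutation rate and correct for multiple testing. The counts and background rate below are illustrative placeholders, not the study's data.

```python
# Toy "significantly mutated gene" test: binomial test per gene vs. an assumed
# background rate, with Benjamini-Hochberg correction. Counts are made up.
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

n_tumors = 216                 # samples analyzed in the study
background_rate = 0.005        # assumed chance of a passenger hit per gene/tumor
observed = {"BAP1": 50, "NF2": 40, "TP53": 18, "GENE_X": 3}  # hypothetical counts

pvals = [binomtest(k, n_tumors, background_rate, alternative="greater").pvalue
         for k in observed.values()]
reject, qvals, _, _ = multipletests(pvals, method="fdr_bh")

for gene, p, q, sig in zip(observed, pvals, qvals, reject):
    print(f"{gene:7s} p={p:.2e} q={q:.2e} significant={sig}")
```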

Some mutations could be targeted with existing therapies

The researchers suggest some of the mutations they found could be targeted by therapies that already exist – such as a BCR-ABL-1 inhibitor that targets fused genes – and they could be matched to a patient’s tumor.

Knowledge about some of the other mutations could also help pathologists improve the accuracy of mesothelioma diagnosis and predict which patients will have poor or better outcomes.

In another example, the researchers identified that a subtype of mesothelioma may be a good candidate for a type of immunotherapy called anti-PD-L1.

Based on these findings, the researchers see genotyping of patients – where the precise genetic alterations that underlie their cancer are identified – as an important next step.

Dr. Bueno concludes:

“Even for a mutation that happens 1-2% of the time, it could mean the difference between life and death for a patient. We plan to continue this important research through investigator-sponsored trials evaluating the potential use of cancer immunotherapies for the treatment of mesothelioma.”

Genomic analysis is transforming the clinical landscape of cancer, bringing closer the day when individual patients are treated for their particular cluster of mutations – increasing the likelihood of a better prognosis.

Medical News Today recently reported on another example where, as a result of comprehensive genomic analyses, researchers concluded that pancreatic cancer is not one but four separate diseases. They suggested that knowing which subtype of pancreatic cancer a patient has will allow doctors to give more accurate prognoses and treatment recommendations.


Source: http://www.medicalnewstoday.com/articles/307217.php