With each of the six biggest global consumer technology companies now deeply invested and feverishly developing products, VR/AR has become too big to fail. Facebook’s Oculus Rift, Samsung’s Gear VR (powered by Oculus), Google’s Cardboard and its Magic Leap investment (and perhaps even Google Glass’s second coming), Sony’s PlayStation VR and Microsoft’s HoloLens are all public. And the eventual entry of Apple is presumed, given hiring headlines and Tim Cook’s pronouncement that VR is not a niche.
However, as much as 2016 will see the launches of the best VR/AR to date (Oculus Rift, HTC Vive, PlayStation VR), this generation of hardware — not portable, and tethered to a bulky PC or console — is aimed mainly at the millions of early-adopter enthusiasts, most of them gamers.
The devices that will turn VR/AR into an interface for hundreds of millions will blend into our lives much more easily. To do that, they will embrace the one device that rules them all — the smartphone — and in doing so, pull it out of its stagnation.
They will look and feel much like ordinary glasses, featuring clear lenses with built-in waveguides transmitting the output of tiny projectors on each arm across the entire surface of each lens, covering a 120-degree field of view with 60-90 frame-per-second 1080p video (960 x 1080 resolution for each eye) delivered wirelessly by your phone. In the out-years it is even possible the projection + waveguide combination will be replaced with beyond-retina-level transparent edge-to-edge LCDs in each lens.
They’ll have the ability to go from being completely clear to displaying images that augment what you see (AR) to displaying images that cover the lenses edge-to-edge (VR), and will leverage next-generation phones for all the computing necessary.
While these displays will sacrifice some image quality and the perfect sense of virtual reality “presence” that large, fully enclosed head-mounted displays tethered to PCs can deliver, they will suffice for the vast majority of popular VR/AR applications and be dramatically easier to wear and carry, driving much greater volume than high-end HMDs.
The technical requirements here are deceptively steep, and have to do with everything from making the round trip from the nine-axis head-tracking sensors and dual cameras on the glasses to the phone for re-computing the image, to sending the uncompressed resulting video back to the glasses wirelessly at multi-gigabit-per-second speeds, to refreshing the projection in the lenses, all in 20-30 milliseconds to prevent the “motion-to-photon” lag that causes VR discomfort.
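The bandwidth figure above is easy to sanity-check with quick arithmetic. This is only a sketch: the per-eye resolution and frame-rate range come from the paragraphs above, while the 24-bit RGB color depth is an assumption not stated in the text.

```python
# Back-of-envelope check of the wireless link and latency budget described above.
WIDTH, HEIGHT = 960, 1080   # per-eye resolution (1080p split across two eyes)
BITS_PER_PIXEL = 24         # assumed standard 8-bit RGB color
FPS = 90                    # top of the quoted 60-90 fps range
EYES = 2

bits_per_second = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS * EYES
gbps = bits_per_second / 1e9
print(f"Uncompressed video: {gbps:.2f} Gbit/s")  # 4.48 Gbit/s

# The 20-30 ms motion-to-photon budget must cover sensor read-out,
# re-computation on the phone, wireless transmission and display refresh;
# at 90 fps, displaying a single frame alone consumes:
frame_time_ms = 1000 / FPS
print(f"{frame_time_ms:.1f} ms per frame")  # 11.1 ms
```

At roughly 4.5 Gbit/s, the uncompressed stream really does demand the multi-gigabit wireless links (WHDI-class) mentioned below, and the ~11 ms frame time shows why only 10-20 ms remain for everything else in the round trip.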
In addition, the requirement to miniaturize the components on the glasses, giving them roughly the weight, portability and style of the glasses you’re used to, is a serious one.
Although these hurdles cannot be cleared in 2016, early prototypes of everything required will exist this year (see the likes of Lumus in-glass projection modules, WHDI multi-Gb/s wireless HDMI point-to-point transmission and Apple’s A9X processors), making 2018 consumer products plausible.
Phones and glasses as best friends
With this approach, the glasses benefit from the portability, connectivity, computing, touch and voice control the phone can deliver, and the phone benefits from the display options (bigger than any display you use today; virtual 120″ television set for Netflix anyone?) and new applications the glasses make possible (spherical and 3D spherical photos and video, and the kind of casual VR entertainment that ustwo, Resolution Games and Oculus Story Studio are pioneering).
This is why VR/AR will not be a competitive platform to mobile, but an interface and ability extension of it, and therefore a demand driver.
How big can this “medium on top of a medium” get? Looking at the market for tablets may be instructive as an upper bound, as prices ($500-$800) — and to a lesser degree use cases — are similar:
Since its introduction in 2010, nearly 300 million iPads have been sold.
Total annual tablet sales are predicted to be 300 million units by 2018.
An installed base of roughly 1 billion units by 2018, eight years after introduction.
Taking out 2-in-1s from the tablet installed-base projection (as it is a different, more general purpose, use case), and cutting the remainder in half because glasses may not go as far into the low end as tablets, an installed base of more than 500 million units for glasses by 2026 is plausible. That’s roughly twice as big as video game consoles and half as big as tablets.
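The projection above reduces to a few lines of arithmetic. This is only a sketch of the reasoning: the ~1 billion tablet base follows from the “half as big as tablets” comparison, and the 2-in-1 share is a hypothetical placeholder, since no figure is given.

```python
# Back-of-envelope installed-base projection for glasses by 2026.
tablet_base_year_8 = 1_000_000_000  # tablets, eight years after introduction
two_in_one_share = 0.0              # hypothetical; no figure is given above
addressable = tablet_base_year_8 * (1 - two_in_one_share)
glasses_base = addressable / 2      # "cutting the remainder in half"
print(f"~{glasses_base/1e6:.0f} million glasses by 2026")  # ~500 million
```

Note that any nonzero 2-in-1 share pulls the result below 500 million, so the “more than 500 million” projection implicitly assumes a tablet base somewhat above 1 billion.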
Projections for the big players
If things go in this direction, here’s how it may play out for The Big Six:
Apple. Wades into the premium part of the market for glasses in late 2018, paired with the iPhone 8, which will have the necessary processing and wireless communication to be the furnace for the glasses (and may extend the rumored multi-lens system of the iPhone 7 to be capable of the spherical and spherical 3D photos and videos that shine on glasses).
The glasses will help drive iPhone demand and bring new life to the $499-$799 price points occupied by iPads, whose sales have flat-lined. For extra credit, Apple Glasses 2 will play well with Apple Car 1 (~2020). Projection: Majority of category profits — again.
Google. Will continue to lead the way on breadth with Cardboard, but these 2-3 minute experiences are low-end tasters. They’ll compete on the high end (paired with Android phones) with some combo of Magic Leap and/or a revived Google Glass as the Nexus showpieces of the segment. Projection: Platform for volume — again.
Facebook. On the hardware side, Oculus will pair with both Google Android and Apple iPhone, bifurcating its product line into high-end true presence tethered headsets and more mainstream glasses with Oculus Rift 3 in 2018. On the software side, we’ll access the vast majority of VR/AR content (photos, videos, games) through one or more of Facebook’s services, as we do for “flatland” content today. Projection: Our portal(s) to the world — again.
Samsung. Partners with Google and Facebook to play defense. Projection: Volume hardware, low profits — again.
Sony. Wades in early with PlayStation VR and sticks with it. Projection: Wins the second-division battle (e.g., consoles) — again.
Microsoft. Throws the ball downfield the farthest and the earliest with the standalone HoloLens headset (everything is on board, including processing), which is neither fish (high-end VR) nor fowl (portable). Projection: Impressive, but not winning, technology — again.
The more things change, the more they stay the same.
Researchers believe they have made a breakthrough in the diagnosis of inherited heart conditions, after developing a rapid, simple blood test that can accurately detect all known genes associated with such disorders.
In the Journal of Cardiovascular Translational Research, researchers from the UK and Singapore reveal how the test – called the TruSight Cardio Sequencing Kit – can identify 174 genes related to 17 inherited heart conditions. These conditions include aortic valve disease, structural heart disease, long and short QT syndrome, Noonan syndrome, familial atrial fibrillation and most cardiomyopathies.
Inherited heart conditions are caused by gene mutations that have been passed down from relatives. If a parent carries one of these faulty genes, there is a 50% chance that they will pass the mutation on to their child.
While it is possible to have one of these gene mutations and never develop the associated heart condition, the gene significantly increases risk for the disorder.
Genetic testing is key to identifying such mutations, enabling early diagnosis of inherited heart conditions and allowing patients to take steps to lower their risk of sudden death from such disorders.
But according to lead researcher Dr. James Ware, of the National Heart and Lung Institute at the MRC Clinical Sciences Centre at Imperial College London, UK, current genetic tests are only capable of identifying small numbers of genes, which means they often overlook gene mutations that could be key for diagnosing an inherited heart condition.
Could the TruSight Cardio Sequencing Kit address this problem?
Blood test identified all gene mutations with up to 100% accuracy
The new test uses next-generation sequencing to simultaneously screen 174 genes known to increase the risk of 17 inherited heart conditions. It works by analyzing the DNA in patients’ blood samples.
Dr. Ware and colleagues assessed the effectiveness of the test in the new study by using it to analyze the blood samples of 348 participants from the National Heart Centre Singapore.
The team found that the test was able to quickly identify all gene mutations in the blood samples that were associated with the 17 inherited heart conditions with up to 100% accuracy.
The researchers say their study shows the new test is faster and more reliable than current genetic tests, allowing quicker and more cost-effective diagnosis of inherited heart disorders.
Commenting on the findings, Prof. Peter Weissberg, medical director of the UK’s British Heart Foundation – which helped fund the study – says: “As research advances and technology develops, we are identifying more and more genetic mutations that cause these conditions. In this rapidly evolving field of research the aim is to achieve ever greater diagnostic accuracy at ever-reducing cost. This research represents an important step along this path. It means that a single test may be able to identify the causative gene mutation in someone with an inherited heart condition, thereby allowing their relatives to be easily tested for the same gene.”
Test ‘increasing number of families who benefit from genetic testing’
The new test has already been implemented at the Royal Brompton & Harefield National Health Service (NHS) Foundation Trust in the UK, where the researchers say it is being used successfully to assess 40 patients per month for inherited heart conditions.
In the US alone, around 100,000 people die from sudden cardiac arrest each year as a result of inherited heart conditions.
The researchers hope that their new test will soon be in clinical use across the globe, aiding the early diagnosis and treatment of inherited heart conditions for some patients and providing peace of mind for others.
“Without a genetic test, we often have to keep the whole family under regular surveillance for many years, because some of these conditions may not develop until later in life. This is hugely costly for both the families and the health system,” notes Dr. Ware.
“By contrast, when a genetic test reveals the precise genetic abnormality causing the condition in one member of the family, it becomes simple to test other family members,” he continues.
“Those who do not carry the faulty gene copy can be reassured and spared countless hospital visits. This new comprehensive test is increasing the number of families who benefit from genetic testing.”
Changes in the types and activities of human gut bacteria could lead to earlier diagnoses of type 2 diabetes, according to a study of identical twins, findings of which are published in Genome Medicine.
Imbalances in the gut microbiota have been linked with a number of conditions, including type 2 diabetes.
However, previous studies have only compared healthy individuals with people already diagnosed with type 2 diabetes.
But changes in the microbiota may occur before type 2 diabetes becomes detectable by other means.
Curtis Huttenhower and colleagues – from the Broad Institute of Massachusetts Institute of Technology (MIT) and Harvard, as well as Seoul National University in South Korea – wanted to find out whether such early changes occur.
They set out to identify links between type 2 diabetes biomarkers, changes in gut microbiota and host genetics.
Identical twins offer unique research opportunities
Participants were 20 healthy identical Korean twins aged 30-48 years, who were already involved in the Healthy Twin Study, in South Korea.
As identical – or monozygotic – twins share the same genes, studying them enables scientists to investigate aspects of disease linked to the gut microbiome in isolation from genetic traits.
The researchers collected data such as age, height and weight, body mass index (BMI), fasting blood sugar (FBS) and details of diet and lifestyle.
They also took 36 fecal samples in order to study the microbial community structure.
Sixteen individuals provided one sample each at the start of the study and a second sample 12-44 months later. Two pairs of twins, or four individuals, were only able to provide the first sample.
The sampling method made it possible to observe changes between individuals and over time.
None of the participants had a prior diagnosis of type 2 diabetes, but the types and levels of disease markers varied widely, from healthy to near-clinical.
This meant that the researchers could compare the functioning and composition of the microbiome at different stages before onset of type 2 diabetes.
Gut microbiota point to future type 2 diabetes
Changes were identified in both composition and function of the participants’ gut microbiome.
They included a decrease in Akkermansia muciniphila (A. muciniphila), which was inversely associated with BMI. There were also functional changes relating to BMI, FBS and triglycerides that suggested oxidative stress due to immune activation or inflammation.
Similar changes have been seen before in patients with chronic type 2 diabetes and inflammatory bowel diseases.
One unexpected finding was that while twins had the same species of microorganisms living in their guts, the strains of the species were different.
Huttenhower says: “It suggests that twins are initially colonized by the same bugs in infancy, due perhaps to shared environment or genetics, and then retain those organisms long enough to begin to diverge through short-term evolution. If true, this can be studied directly in larger twin cohorts, and it would help us understand how the microbiome develops beyond diabetes alone in a wide variety of conditions.”
The researchers hope that the methodology and findings will contribute to tracking changes that take place before and after type 2 diabetes becomes apparent.
They also speculate that microbial or immune responses play a causative role, although this was not a part of the current study.
Given the small sample size, the researchers suggest that larger cohorts should be examined to confirm the findings, but they hope that the methods used will be useful in future investigations.
A rapid, accurate test that can detect biomarkers of lung cancer in saliva is soon to be trialed in patients.
The news marks a milestone in over 10 years of research led by oral cancer and saliva diagnostics researcher Prof. David Wong, of the School of Dentistry at the University of California-Los Angeles (UCLA).
Prof. Wong and colleagues have been working on a method called “liquid biopsy” that detects circulating tumor DNA in bodily fluids such as saliva and blood.
Liquid biopsy holds the promise of rapid, less invasive identification of cancers and easier tracking of disease progress during treatment.
Prof. Wong described the prototype in a news briefing at the 2016 Annual Meeting of the American Association for the Advancement of Science (AAAS), which is taking place in Washington, DC.
The device uses electric field-induced release and measurement (EFIRM) to detect non-small cell lung cancer (NSCLC) biomarkers in saliva.
The EFIRM device analyzes the contents of exosomes – tiny bags of molecules that cells release now and again. The device forces the exosomes to release their contents and carries out bio-recognition of the released biomolecules at the same time.
High accuracy compared with current sequencing technology
In a study published in 2013, Prof. Wong and colleagues described using EFIRM to show that saliva contains tumor-shed exosomes, which have previously been found in blood.
The approach has a high accuracy compared with current sequencing technology, says Prof. Wong, explaining that the trial in lung cancer patients is taking place in China this year. The study is a collaboration between UCLA and West China Hospital of Sichuan University.
Prof. Wong says the test takes only 10 minutes to give a result and could be done in the doctor’s office.
He sees it forming part of a set of diagnostic tools. For example, should a lung X-ray show a suspicious nodule, then the doctor could use the saliva test to rapidly find out if cancer is likely.
The test works by detecting genetic mutations in a protein called epidermal growth factor receptor (EGFR). The protein normally helps cells grow and divide, but some NSCLC cells have too much EGFR, which makes them grow faster. If such mutations are detected, drugs called EGFR inhibitors, which block the protein, could be prescribed promptly by a clinician.
Prof. Wong and colleagues have also been looking at the possibility of a saliva test for detecting mutations linked to oropharyngeal cancers – cancers of the mouth and the back of the throat.
Gypsyamber D’Souza, associate professor of epidemiology at the Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD, was also at the news briefing with Prof. Wong. She says enthusiasm for the liquid biopsy must be tempered by the complexity of the cancer process and the potential usefulness of the technique for specific cancers.
Prof. Wong’s announcement follows a recent Medical News Today report about the development of a salivary gland test for early Parkinson’s disease. The test, which uses a biopsy of gland tissue, could provide an accurate and timely diagnosis of a disease that currently cannot be detected in its early stages.
One of the drawbacks of DNA aptamers – synthetic small molecules that show promise for detecting and treating cancer and other diseases – is they do not bind readily to their targets and are easily digested by enzymes in the body. Now, scientists have found a way to produce DNA aptamers without these disadvantages.
The team – from the Institute of Bioengineering and Nanotechnology (IBN) at Agency for Science, Technology and Research (A*STAR) in Singapore – describes how they developed and tested the improved DNA technology in the journal Scientific Reports.
IBN Executive Director Prof. Jackie Y. Ying says the team created “a DNA aptamer with strong binding ability and stability with superior efficacy,” adding: “We hope to use our DNA aptamers as the platform technology for diagnostics and new drug development.”
Aptamers are a special class of synthetic ribonucleic acid (RNA) or deoxyribonucleic acid (DNA) molecules that are showing promise for clinical use.
These small molecules could be ideal for drug applications because they can be made for highly specific targets – such as proteins, viruses, bacteria and cells.
Drawbacks of current DNA aptamers
Once aptamers are engineered for a specific target, they bind to it and block its activity.
They are the chemical equivalent of antibodies, except, unlike the antibodies currently used in drug development, they do not cause undesirable immune responses and could be easier to mass produce at high quality.
The first aptamer-based drug – an RNA aptamer for the treatment of age-related macular degeneration (AMD) – was approved in the US in 2004, and several other aptamers are currently being evaluated in clinical trials.
However, no DNA aptamer has yet been approved for clinical use because the ones currently developed do not bind well to molecular targets and are easily digested in the bloodstream by enzymes called nucleases.
In their paper, lead author Dr. Ichiro Hirao, a principal research scientist at IBN, and colleagues describe how they overcame these two problems.
‘Unnatural base’ and ‘mini-hairpin’ remove DNA aptamer disadvantages
To overcome the problem of weak binding, the team added a new artificial component – called an “unnatural base” – to a standard DNA aptamer, which typically has four components.
The paper describes how the addition of a fifth unnatural base component to the DNA aptamer strengthened its binding ability by 100 times.
To prevent the aptamer from being easily digested by enzymes, the team added a small piece of DNA that they call a “mini-hairpin DNA.”
Dr. Hirao says mini-hairpin DNAs are made of small DNA fragments that form a compact, stem-loop structure, like a hairpin, and this is what makes them stable.
Typically, DNA aptamers do not last longer than an hour in blood at room temperature because they are broken down by nucleases. But the team found the addition of the mini-hairpin DNA could help DNA aptamers survive for days – making them more appealing for drug development.
In their paper, the scientists describe how their modifications improved a DNA aptamer that targets a cell-signaling protein called interferon gamma.
Lab tests showed the improved aptamer survived in human blood at 37 °C for 3 days and “sustainably inhibited the biological activity” of interferon gamma, note the authors.
Dr. Hirao says their modifications show it is possible to generate DNA aptamers with great promise for clinical use: they are potentially more effective in their action, cheaper to produce and likely to cause fewer adverse side effects than conventional approaches. He concludes: “The next step of our research is to use the aptamers to detect and deactivate target molecules and cells that cause infectious diseases, such as dengue, malaria and methicillin-resistant Staphylococcus aureus (MRSA), as well as cancer.”
In December 2015, Medical News Today learned how researchers from the University of Texas at Arlington are developing a way to detect cancer cells using electronic chips coated with RNA aptamers. The team hopes it will lead to a tabletop tool that offers doctors cheaper and faster tests for disease prediction.
A new study, published in Neurology, finds plaques in the brains of middle-aged people who have experienced head injuries. These amyloid plaques match those found in Alzheimer’s, but their spatial distribution differs.
According to an editorial published alongside the present research, emergency department visits for traumatic brain injury (TBI) have risen 70% in the last 10 years.
Today, between 2 and 5 million Americans are estimated to live with a TBI-related disability.
According to the National Institute of Neurological Disorders and Stroke, a TBI occurs “when a sudden trauma causes damage to the brain.”
TBIs can occur in any number of ways, from a sporting incident to a workplace accident. They are caused when the head strikes a solid object, or when an object penetrates the skull.
The impact of brain injuries
Individuals who experience a TBI can have a multitude of medical issues, varying in gravity. Prognosis depends on a number of factors, including the severity of the injury, where in the brain the impact occurs and the age of the patient.
Around half of TBIs will require surgery to repair ruptured blood vessels or bruised brain tissue. Some individuals will face cognitive problems or difficulties processing sensory information. Others still might have difficulty communicating or display mental health issues such as anxiety or depression.
Another long-term risk for TBI patients is dementia. The mechanisms behind this relationship are unclear, but the current research makes some headway into understanding how this might occur.
Brain injury and Alzheimer’s
Researchers at Imperial College London in the UK, led by Prof. David Sharp, took an in-depth look at the brains of middle-aged individuals who had suffered a TBI.
The study took brain scans of nine individuals with moderate to severe TBIs. The average age of the group was 44, and their brain injuries had occurred between 11 months and 17 years previously.
The researchers utilized two types of scan: PET scans (positron emission tomography) and MRI scans (magnetic resonance imaging). The PET scans detected amyloid plaques in the brain and the MRI scans searched out evidence of cellular damage resulting from the trauma.
The TBI group’s scans were compared with those of 10 people with Alzheimer’s and 9 healthy control subjects. Commenting on the results, Prof. Sharp says:
“The areas of the brain affected by plaques overlapped those areas affected in Alzheimer’s disease, but other areas were involved. It suggests that plaques are triggered by a different mechanism after a traumatic brain injury. The damage to the brain’s white matter at the time of the injury may act as a trigger for plaque production.”
The team found that both the Alzheimer’s and the brain injury groups had amyloid plaques in the posterior cingulate cortex. This highly connected and metabolically active brain region is known to be involved in the early stages of Alzheimer’s progression.
Interestingly, the TBI group, but not the Alzheimer’s group, also showed plaques in the cerebellum.
Future research possibilities
To date, medications for Alzheimer’s can only minimize certain symptoms and slow its progression. This is not adequate; hundreds of research teams are investigating better solutions on a global scale.
A vital aspect of Alzheimer’s research is the creation of reliable animal models. Medical News Today recently asked Dr. Gregory Scott, the study’s first author, whether his research could be useful in this regard:
“Potentially, however, a well-established challenge is that many animals do not generate amyloid in the same way as humans after a brain trauma. As you say, there are already animal models of Alzheimer’s disease, of course, and in fact we are involved in a study looking at TBI in animal models of Alzheimer’s disease.”
The team cautiously notes that the current study is a relatively small-scale trial; however, Prof. Sharp holds out hope that it could lead to more. He believes that if a substantial link can be found between brain injury and the onset of Alzheimer’s disease, it might help neurologists uncover treatment and prevention strategies to reduce the progression of Alzheimer’s at an earlier stage.
When asked about future research, Dr. Scott told MNT that he is currently looking at novel ways to reduce inflammation after TBI, and investigating the relationship between brain inflammation and white matter damage:
“We have completed another PET study recently looking at neuroinflammation after TBI and the effect of the antibiotic minocycline on the signal. We are also looking at other biomarkers of chronic injury.”
TBIs can be serious and life-changing events. Research will, no doubt, lead to significant improvements in the way brain injury is treated. MNT recently covered research that found a link between ADHD and traumatic brain injury.
Proton therapy is as effective as standard photon or X-ray radiotherapy at treating the most common type of malignant brain tumor in children – and causes fewer long-term side effects.
This was the conclusion of a trial led by Massachusetts General Hospital (MGH) in Boston and published in The Lancet Oncology.
Lead and corresponding author Torunn Yock – director of pediatric radiation oncology at MGH, and an associate professor of radiation oncology at Harvard Medical School in Boston, MA – says:
“Our results indicate that proton therapy maintains excellent cure rates in pediatric medulloblastoma while reducing long-term side effects, particularly in hearing and neurocognitive function, and eliminating cardiac, pulmonary, gastrointestinal and reproductive effects.”
Medulloblastoma is a fast-growing brain tumor that occurs mostly in children and accounts for 18% of childhood brain tumors. It develops in the cerebellum at the base of the brain.
In most cases, medulloblastoma can be treated successfully with a combination of surgery, chemotherapy and radiotherapy, but because of its position in the brain, the treatment often results in long-term side effects.
Less collateral damage to healthy tissue
The aim of radiotherapy is to kill all malignant cells to eliminate the tumor and stop it growing back. While conventional photon radiotherapy based on X-rays can do this, there is a high risk of collateral damage, because the beam — although it is directed at the tumor — also delivers radiation to the tissue in front of and behind it.
Such damage may not make a big difference if there is plenty of surrounding tissue whose loss does not impair function. But in the brain – particularly in the brain of a child – every tiny bit of healthy tissue counts and any loss is more likely to impair important functions.
Proton therapy – also known as proton beam therapy – uses a proton beam, with which it is possible to more precisely confine radiation to the tumor. The result is a much smaller chance of killing healthy surrounding tissue.
Prof. Yock explains that although proton therapy is still not widely available in the US and other countries, more and more doctors value its potential for reducing treatment side effects, particularly in children, and at “experienced centers,” she notes, “proton therapy has a proven track record of treatment success and safety.”
However, while proton therapy is valued because it appears to reduce adverse side effects, the authors note that nobody had actually done a long-term follow-up of children treated for medulloblastoma with proton therapy.
So for their trial, the team enrolled 59 patients with an average age of 6.5 years (range 3-21) who underwent proton therapy for medulloblastoma at MGH between 2003 and 2009, following surgery to remove as much of the tumor as possible. All patients had also received chemotherapy before, during or after proton therapy.
At the start of the study and at follow-up visits, the investigators measured the patients’ hearing, mental function, hormone levels, height and weight. Thirteen patients died over the follow-up, which lasted up to 8 years.
Survival rates similar, but side effects reduced
When they analyzed the results, the researchers found that survival rates and the incidence and type of tumor recurrence among the proton therapy patients were similar to those reported for photon radiotherapy in other studies.
However, there were reductions in side effects. For example, 3 years after treatment, 12% of patients had significant hearing loss, and this proportion increased to 16% at 5 years. These figures compare favorably with the 25% reported in studies using photon radiotherapy, the authors note.
The impact of proton therapy on some mental functions – such as verbal comprehension and processing speed – was also less serious than that reported with photon radiotherapy. The authors note that these effects occurred primarily in children who were 8 years old or under when they received proton therapy.
Effects on hormone levels were comparable to those reported with photon therapy, with 63% of patients showing a deficit in at least one hormone 7 years after treatment.
But a significant result is the absence of any heart, lung, intestinal, seizure or secondary tumor effects in the proton therapy patients. All these side effects have been reported in photon therapy studies.
The authors conclude that because their findings show proton radiotherapy has “acceptable toxicity” and has similar survival outcomes to conventional radiotherapy, it could be an alternative to photon-based treatments.
The team is now studying the quality-of-life differences between proton and photon treatment, but Prof. Yock nonetheless says:
“I truly believe that – particularly for the youngest children – the ability to offer them proton therapy can make a big difference in their lives.”
Magic Leap has been successful in keeping a lot of its technology under wraps — the augmented and virtual reality technology company has yet to release a product — but its fundraising has been an open secret. Today, however, one part of it is finally being confirmed: the startup announced that it has raised $793.5 million in a Series C round of funding, at a valuation that a spokesperson tells us was $3.7 billion pre-money, and $4.5 billion post-money.
That’s right: a $4.5 billion valuation, with a commercial product yet to launch.
This latest investment was led by China’s e-commerce powerhouse Alibaba, with participation also from existing investors Google and Qualcomm Ventures. Other new investors in this round are a who’s-who of finance with a little dose of entertainment: they include Warner Bros, Fidelity, J.P. Morgan, Morgan Stanley, T. Rowe Price and Wellington Management Co. (Am I the only one surprised that Disney is not even partly involved?)
Today’s announcement doesn’t give us any peek into when we might finally see a product launch from the company — we are asking the question, though. What we do get is a basic description, from Magic Leap’s founder, of how the startup proposes to fit its tech into the current mix of digital media.
“Here at Magic Leap we are creating a new world where digital and physical realities seamlessly blend together to enable amazing new experiences. This investment will accelerate bringing our new Mixed Reality Lightfield experience to everyone,” said Rony Abovitz, Founder, President, and CEO of Magic Leap, Inc. in a statement. “We are excited to welcome Alibaba as a strategic partner to help introduce Magic Leap’s breakthrough products to the over 400 million people on Alibaba’s platforms.”
As part of the round, Alibaba’s Joe Tsai will join Magic Leap’s board.
“We invest in forward-thinking, innovative companies like Magic Leap that are developing leading products and technologies,” said Joe Tsai, Executive Vice Chairman at Alibaba. “We believe Alibaba can both provide support to and learn from such a partner, and we look forward to working with the Magic Leap team.”
The Alibaba participation, apart from being financial, could also have strategic elements: one of the larger barriers to further growth in e-commerce is that shoppers buy physical items without being able to see them with their own eyes first.
Better AR experiences could be one way of giving people a closer look at what they are buying in the virtual world — see the video above and imagine that instead of a cute elephant, the object in your hands is a camera, or a piece of jewellery, or a bathing suit that you can subsequently ‘try on’. It could even be used to create virtual “stores” where people could walk along aisles to browse and shop as they do in the real, physical world.
If the funding sounds insane — especially in today’s climate where some of the most promising tech startups are getting marked down, and we hear many canaries telling us that there is more to come — from what I understand, there is a method to this madness.
Magic Leap has already been working for years on its technology. That work is not just about writing code, but about processing power and essentially creating completely new environments in which to develop and test what is being built. It’s probably still a gamble, and, as with everything else in tech, what costs a fortune now will eventually drop drastically in price — but for now this is the price of entry for something that holds the promise of being transformational, or so the thinking goes.
Other investors in the company include Legendary Entertainment, KKR, Vulcan Capital, Kleiner Perkins Caufield & Byers, Andreessen Horowitz, Obvious Ventures, and others. Prior to this round, Magic Leap had raised $592 million.
A large international meeting on the ethics of human-genome editing is poised to begin — and researchers are curious about how perceived differences in attitudes will play out.
“We’re hoping to sort of take the temperature of the world,” says David Baltimore, the virologist at the California Institute of Technology in Pasadena who is chairing the International Summit on Human Gene Editing. It runs 1–3 December in Washington DC.
Jointly organized by the US National Academy of Sciences, the US National Academy of Medicine, the Chinese Academy of Sciences and the UK Royal Society, the meeting is expected to draw representatives from more than 20 countries, including India, Sweden and Nigeria.
The popularity of the genome-editing tool CRISPR–Cas9, which uses bacterial enzymes to cut genomes in precise spots to disrupt or repair troublesome genes, has sparked an ethical debate — and many believe that the time is ripe for an international discussion.
In January, Baltimore and a small group of scientists gathered in Napa, California, to discuss issues surrounding genome editing, including rumours that researchers had already edited human embryos. Some consider the editing of any reproductive cell contentious because the changes could be passed to future generations. Concerns escalated in April, when researchers in China announced that they had edited human embryos — although they had deliberately used non-viable embryos that could not result in a live birth [1].
Baltimore and his colleagues then approached Ralph Cicerone, president of the National Academy of Sciences, with the idea of holding an international summit. “Everyone knew that whatever anybody did had to be inherently international,” Cicerone says. “There are really strong efforts in so many countries that could employ this new technology.”
Zhihong Xu, a plant biologist at Peking University who will represent the Chinese Academy of Sciences at the meeting, is curious about whether perceived differences in attitude — in particular between the United States and China — are real. “I believe that this is an issue for all of us to consider seriously together,” he says.
Cicerone hopes that the meeting will illuminate any scientific, ethical and cultural differences in how countries think about genome editing — and perhaps even lead to the beginnings of an international consensus on outstanding scientific questions, research priorities and ethical guidelines.
But such a consensus would only be the start of a broader discussion, Cicerone cautions. Eventually, the health industry, disease lobby groups, members of the public and governments of the many nations involved will need to feed into decisions. “As much work as we’ve put into this meeting,” Cicerone says, “it really is only a first big step.”
Doctors might get even better at detecting tumors in breast cancer patients early, thanks to pressure-sensitive rubber gloves that supercharge their sense of touch. But the sensors that power those gloves could be useful in all kinds of non-medical scenarios, too.
Getting a regular exam from your doctor is still one of the most effective ways of catching the signs of breast cancer early, but it’s easy to miss the telltale hardness of a tumor when a rubber surgical glove is involved. That’s why a team of researchers led by Dr. Sungwon Lee and Professor Takao Someya of the University of Tokyo’s Graduate School of Engineering has developed a new type of pressure sensor which is thin and resilient enough to fit into a glove.
Pressure sensors flexible enough to mold themselves to the contours of a human hand have been available for a while now, but they can’t handle bending, twisting, or wrinkling while still giving accurate measurements. Using organic transistors made of carbon nanotubes, graphene, carbon, and oxygen, the University of Tokyo team was able to address this problem, creating a transparent sensor just 8 micrometers thick — one-fifth the thickness of a human hair — that can measure pressure in 144 places at once. In conjunction with the right software, these sensors could be used in standard surgical gloves to help doctors detect tumors by touch alone.
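The article doesn’t describe what that software would actually do, but the basic idea — spotting a cell in a 144-point pressure map that reads much harder than its surroundings — is simple to sketch. The grid layout, threshold, and function name below are illustrative assumptions, not the researchers’ actual system:

```python
# Hypothetical sketch: flag unusually hard spots in a 12 x 12 (144-point)
# pressure map, as a glove-mounted sensor array might report one.
# The z-score threshold is an illustrative assumption.

def find_hard_spots(pressure_map, z_threshold=2.0):
    """Return (row, col) cells whose pressure is far above the grid mean."""
    flat = [p for row in pressure_map for p in row]
    n = len(flat)
    mean = sum(flat) / n
    std = (sum((p - mean) ** 2 for p in flat) / n) ** 0.5 or 1.0
    return [
        (r, c)
        for r, row in enumerate(pressure_map)
        for c, p in enumerate(row)
        if (p - mean) / std > z_threshold
    ]

# Uniform soft tissue with one stiff inclusion at row 5, column 7:
grid = [[1.0] * 12 for _ in range(12)]
grid[5][7] = 4.0
print(find_hard_spots(grid))  # -> [(5, 7)]
```

A real system would of course need calibration against tissue stiffness and far more robust statistics, but the point is that 144 simultaneous readings give software something a single fingertip reading never could: spatial context.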
But according to the team, this same technology has just as much potential for implantable and wearable devices. Sadly, they didn’t go as far as to name them, but it’s easy to imagine the possibilities. Just a few uses that come to mind include a smart tattoo that could also function as a touchpad, touch sensitive clothing that can go through the wash, or pressure-sensing VR gloves as thin as the ones you use to do the dishes that can detect how you’re moving your fingers. Saving lives might be just the start for this technology.