Even the most advanced security teams have a hard time staying ahead of cyberattacks. That’s why Cylance has developed artificial intelligence algorithms to seek out vulnerabilities in computer networks and address them. The Irvine, Calif., company is growing fast and has raised a nine-figure Series D to grow even faster.
CylanceProtect identifies and prevents zero days — that is, holes in software not yet known to the vendor — and other malware and advanced threats, thus protecting customers from downtime, distraction and brand tarnishing. The company says it is serving more than 1,000 customers, growing 785 percent in users and 1,089 percent in product billings since its introduction, “propelling the company to achieve its mission many years earlier than anticipated.”
Cylance has closed $100 million in financing to expand its sales, marketing and engineering programs for its endpoint protection go-to-market strategies. Funds managed by Blackstone Tactical Opportunities and Insight Venture Partners led the round, with follow-on investments by the company’s existing investors, which include DFJ Growth, Fairhaven Capital Partners, and Khosla Ventures, as well as the Blackstone Group, Capital One Growth Ventures, Dell Ventures, Draper Nexus Ventures, KKR & Co. and Ten Eleven Ventures.
Including this latest round, Cylance has raised $177 million total.
“We founded Cylance almost four years ago with a singular mission: protect those who cannot protect themselves, and empower those who can,” CEO Stuart McClure said in a statement. “Our goal of reinventing endpoint security by using machine learning to think like a cyber hacker has been achieved, and we now must ensure that it is put in the hands of security leaders inside enterprises, organizations, governments and small businesses as quickly as possible.”
McClure previously sold his cybersecurity firm Foundstone in 2004 to McAfee, where he spent several years as an executive, including as CTO, before founding Cylance.
“Cylance’s strong track record, including at Blackstone portfolio companies, deepens our conviction in the value that this platform can offer across sectors,” said Viral Patel, a managing director in Blackstone’s Tactical Opportunities group.
“Cylance’s team and technology have delivered on exactly what they’ve promised: elegant and effective prevention at the endpoint, in addition to impressive customers and growth,” added Insight managing director Mike Triplett.
If there’s one technology that promises to change the world more than any other over the next several decades, it’s (arguably) machine learning.
By enabling computers to learn certain things more efficiently than humans, and discover certain things that humans cannot, machine learning promises to bring increasing intelligence to software everywhere and enable computers to develop new capabilities – from driving cars to diagnosing disease – that were previously thought to be impossible.
While most of the core algorithms that drive machine learning have been around for decades, what has magnified its promise so dramatically in recent years is the extraordinary growth of the two fuels that power these algorithms – data and computing power.
Both continue to grow at exponential rates, suggesting that machine learning is at the beginning of a very long and productive run.
As revolutionary as machine learning will be, its impact will be highly asymmetric. While most machine learning algorithms, libraries and tools are in the public domain and computing power is a widely available commodity, data ownership is highly concentrated.
This means that machine learning will likely have a barbell effect on the technology landscape. On one hand, it will democratize basic intelligence through the commoditization and diffusion of services such as image recognition and translation into software broadly. On the other, it will concentrate higher-order intelligence in the hands of a relatively small number of incumbents that control the lion’s share of their industry’s data.
For startups seeking to take advantage of the machine learning revolution, this barbell effect is a helpful lens to look for the biggest business opportunities. While there will be many new kinds of startups that machine learning will enable, the most promising will likely cluster around the incumbent end of the barbell.
Democratization of Basic Intelligence:
One of machine learning’s most lasting areas of impact will be to democratize basic intelligence through the commoditization of an increasingly sophisticated set of semantic and analytic services, most of which will be offered for free, enabling step-function changes in software capabilities. These services today include image recognition, translation and natural language processing and will ultimately include more advanced forms of interpretation and reasoning.
Software will become smarter, more anticipatory and more personalized, and we will increasingly be able to access it through whatever interface we prefer – chat, voice, mobile application, web, or others yet to be developed. Beneficiaries will include technology developers and users of all kinds.
This burst of new intelligent services will give rise to a boom in new startups that use them to create new products and services that weren’t previously cost effective or possible. Image recognition, for example, will enable new kinds of visual shopping applications. Facial recognition will enable new kinds of authentication and security applications. Analytic applications will grow ever more sophisticated in their ability to identify meaningful patterns and predict outcomes.
Startups that end up competing directly with this new set of intelligent services will be in a difficult spot. Competition in machine learning can be close to perfect, wiping out any potential margin, and it is unlikely many startups will be able to acquire data sets to match Google or other consumer platforms for the services they offer. Some of these startups may be bought for the asset values of their teams and technologies (which at the moment are quite high), but most will have to change tack in order to survive.
This end of the barbell effect is being accelerated by open source efforts such as OpenAI as well as by the decision of large consumer platforms, led by Google with TensorFlow, to open source their artificial intelligence software and offer machine learning-driven services for free, as a means of both selling additional products and acquiring additional data.
Concentration of Higher-Order Intelligence:
At the other end of the barbell, machine learning will have a deeply monopoly-inducing or monopoly-enhancing effect, enabling companies that have or have access to highly differentiated data sets to develop capabilities that are difficult or impossible for others to develop.
The primary beneficiaries at this end of the spectrum will be the same large consumer platforms that offer free services, such as Google, as well as other enterprises in concentrated industries that have highly differentiated data sets.
Large consumer platforms already use machine learning to take advantage of their immense proprietary data to power core competencies in ways that others cannot replicate – Google with search, Facebook with its newsfeed, Netflix with recommendations and Amazon with pricing.
Incumbents with large proprietary data sets in more traditional industries are beginning to follow suit. Financial services firms, for example, are beginning to use machine learning to take advantage of their data to deepen core competencies in areas such as fraud detection, and ultimately they will seek to do so in underwriting as well. Retail companies will seek to use machine learning in areas such as segmentation, pricing and recommendations, and healthcare providers in diagnosis.
Most large enterprises, however, will not be able to develop these machine learning-driven competencies on their own. This opens an interesting third set of beneficiaries at the incumbent end of the barbell: startups that develop machine learning-driven services in partnership with large incumbents based on these incumbents’ data.
Where the Biggest Startup Opportunities Are:
The most successful machine learning startups will likely result from creative partnerships and customer relationships at this end of the barbell.
The magic ingredient for creating revolutionary new machine learning services is extraordinarily large and rich data sets. Proprietary algorithms can help, but they are secondary in importance to the data sets themselves.
What’s critical to making these services highly defensible is privileged access to these data sets. If possession is nine tenths of the law, privileged access to dominant industry data sets is at least half the ballgame in developing the most valuable machine learning services.
The dramatic rise of Google provides a glimpse into what this kind of privileged access can enable.
What allowed Google to rapidly take over the search market was not primarily its PageRank algorithm or clean interface, but these factors in combination with its early access to the data sets of AOL and Yahoo, which enabled it to train PageRank on the best available data on the planet and become substantially better at determining search relevance than any other product.
Google ultimately chose to use this capability to compete directly with its partners, a playbook that is unlikely to be possible today since most consumer platforms have learned from this example and put legal barriers in place to prevent it from happening to them.
There are, however, a number of successful playbooks to create more durable data partnerships with incumbents.
In consumer industries dominated by large platform players, the winning playbook in recent years has been to partner with one or ideally multiple platforms to provide solutions for enterprise customers that the platforms were not planning (or, due to the cross-platform nature of the solutions, were not able) to provide on their own, as companies such as Sprinklr, Hootsuite and Dataminr have done.
The benefits to platforms in these partnerships include new revenue streams, new learning about their data capabilities and broader enterprise dependency on their data sets.
In concentrated industries dominated not by platforms but by a cluster of more traditional enterprises, the most successful playbook has been to offer data-intensive software or advertising solutions that provide access to incumbents’ customer data, as Palantir, IBM Watson, Fair Isaac, AppNexus and Intent Media have done. If a company gets access to the data of a significant share of incumbents, it will be able to create products and services that will be difficult for others to replicate.
New playbooks are continuing to emerge, including creating strategic products for incumbents or using exclusive data leases in exchange for the right to use incumbents’ data to develop non-competitive offerings.
Of course the best playbook of all — where possible — is for startups to grow fast enough and generate sufficiently large data sets in new markets to become incumbents themselves and forego dependencies on others (as, for example, Tesla has done for the emerging field of autonomous driving).
This tends to be the exception rather than the rule, however, which means most machine learning startups need to look to partnerships or large customers to achieve defensibility and scale.
Machine learning startups should be particularly creative when it comes to exploring partnership structures as well as financial arrangements to govern them – including discounts, revenue shares, performance-based warrants and strategic investments. In a world where large data sets are becoming increasingly valuable to outside parties, it is likely that such structures and arrangements will continue to evolve rapidly.
Perhaps most importantly, startups seeking to take advantage of the machine learning revolution should move quickly, because many top technology entrepreneurs have woken up to the scale of the business opportunities this revolution creates, and there is a significant first-mover advantage to get access to the most attractive data sets.
Tech companies have become expert at analyzing consumer shopping patterns on websites. But the next frontier is observing how people shop in old-fashioned brick-and-mortar retail stores, and a growing number of companies, from startups to giants like Facebook, are tackling the problem.
On Wednesday, xAd unveiled a new service that tracks foot traffic to real world stores and serves up the information to businesses through an online dashboard.
The company can tell when consumers walk into individual stores thanks to partnerships it has struck with more than 100,000 smartphone apps. The apps relay GPS location information, which xAd aggregates and anonymizes to measure and analyze who is shopping at different stores. (xAd says it works with its app partners to ensure that all the location data it collects is gathered with the necessary permissions from users.)
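The mechanics of turning raw GPS pings into store-visit counts can be illustrated with a simple geofence check. This is only a sketch of the general idea, not xAd's actual pipeline; the store coordinates, 50-meter radius and feature names are invented for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def count_store_visits(pings, store, radius_m=50):
    """Count anonymized GPS pings that fall inside a store's geofence."""
    return sum(
        1 for lat, lon in pings
        if haversine_m(lat, lon, store["lat"], store["lon"]) <= radius_m
    )

store = {"lat": 40.7580, "lon": -73.9855}  # hypothetical storefront
pings = [
    (40.7581, -73.9854),   # ~14 m away: inside the geofence
    (40.7700, -73.9800),   # ~1.3 km away: outside
    (40.75805, -73.98555), # ~7 m away: inside
]
print(count_store_visits(pings, store))  # 2
```

A production system would also cluster repeated pings from the same anonymous device into a single "visit" and filter out drive-bys by dwell time, but the core signal is exactly this distance-to-store test aggregated over millions of devices.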
The new service, called MarketPlace Discover, has been tested by Taco Bell and several other major brands, according to the company.
Facebook is also looking to bridge the gap between offline shopping and its trove of ad-targeting data. On Tuesday the company announced new features to let retailers provide maps to their stores within the ads that appear on the social network. Facebook will also be able to measure the number of people who actually visit the stores using its location features.
Slowly but surely, cyber security is evolving from the days of castles and moats into the modern era of software-driven business. In the 1990s, after several failed attempts to build secure operating systems, the predominant security model became the network-perimeter model enforced by firewalls. The premise was simple: machines inside the firewall were trusted, and anything outside was untrusted. This castle-and-moat approach failed almost as quickly as it began, because holes had to be punched in the wall to let emerging internet services like email and web traffic through.
With a security wall that quickly became like Swiss cheese, machines on both sides were still vulnerable to infection and the antivirus industry emerged to protect them. The model for antivirus then and now is to capture an infection, create a signature, and then distribute it widely to “immunize” other machines from getting infected by the same malware. This worked for vaccines, so why not try for cyber security?
Fast-forward to 2016, and the security industry hasn’t changed much. The large security companies still pitch the castle-and-moat model of security — firewalls and signature-based detection — even though employees now work outside the perimeter as much as inside, and despite the fact that most attacks today use one-and-done exploit kits that never reuse the same malware. In other words, the modern work force coupled with modern threats has rendered traditional security techniques obsolete.
Software is eating security
While most enterprises today still employ these dated security techniques, a new model of security based on artificial intelligence (AI) is beginning to take root in organizations with advanced security programs. Necessity is the mother of invention, and the necessity for AI in security became obvious when three phenomena emerged: (1) The failure of signature-based techniques to stop current threats; (2) the voluminous amounts of security threat data; and (3) the scalability challenges in addressing security threat data with people.
“Software is eating the world,” the noted venture capitalist Marc Andreessen famously said in 2011 about such obvious examples as Amazon, Uber and Airbnb disrupting traditional retail and consumer businesses. The security industry is ripe for the same kind of disruption in the enterprise space, and ultimately in the consumer product space. Artificial intelligence will replace large teams of tier-1 SOC analysts who today stare at endless streams of threat alerts. Machines are far better than humans at processing vast amounts of data and finding the proverbial needle in the haystack.
Artificial intelligence is experiencing a resurgence in commercial interest because of breakthroughs in deep learning neural networks solving practical problems. We’ve all heard about IBM’s Watson winning at “Jeopardy,” or making difficult medical diagnoses by leveraging artificial intelligence. What is less well known is that Watson has recently undergone a major deep learning upgrade as well, allowing it to translate to and from many languages and to perform text-to-speech and speech-to-text operations flawlessly.
Many of us interact with deep learning algorithms unwittingly: when we see TV show and movie recommendations on Netflix based on what we’ve viewed previously, when a Mac identifies everyone in a picture uploaded from a phone, or when we ask Alexa a question and Amazon Echo gives an intelligent response (likewise for Cortana and Siri). And one of the most hotly debated topics in machine learning these days is self-driving cars, like Tesla’s amazing Model S.
Deep learning allows a machine to think more like a human. For instance, a child can easily distinguish a dog from a cat. But to a machine, a dog is just a set of pixels and so is a cat, which makes the process of distinguishing them very hard for a machine. Deep learning algorithms can train on millions of pictures of cats and dogs so that when your in-house security camera sees the dog in your house, it will know that it was Rover, not Garfield, who knocked over the vase.
The power of deep learning becomes clear when you consider the vast speed and processing power of modern computers. It takes a child a few years to learn the difference between a house cat and a dog, and if that child grew up to be a cat “expert,” it would take Malcolm Gladwell’s proverbial 10,000 hours to become a feline whisperer. Exposing a human to all of the training data needed to classify animals with near perfection simply takes a long time. In contrast, a deep learning algorithm paired with elastic cloud computing resources can consume hundreds of millions of training samples in hours, producing a neural network classifier so accurate and so fast that it outperforms even the most highly trained human experts.
What is more fascinating than this new technology allowing machines to think like humans is that it also allows machines to act like humans. Since the 1950s, we’ve been fascinated with the notion that robots might one day be able to think, act and interact with us as our equals. With advances in deep learning, we’re one giant step closer to that reality. Take the Google Brain Team’s DeepDream research, for instance, which shows that machines trained in deep learning can create beautiful pieces of art, in a bizarre form of psychedelic machine “dreaming.” For the first time, we see incredible creativity from machines because of deep learning, as well as the ability to make decisions with incredible accuracy.
Because of this ability to make classification decisions with incredible accuracy, deep learning is leading a renaissance in security technologies, where it is used to distinguish unknown malware from benign programs. As in the examples above, this is done by training deep learning neural networks on tens of millions of variants of malware, as well as on a representative sample of known benign programs.
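The core idea — extract numeric features from executables, then learn a decision boundary between malicious and benign — can be sketched in a few lines. This toy example is not Cylance’s model: the three features (byte entropy, packed-section ratio, suspicious-API count) and all the numbers are invented, the training data is synthetic, and a single-layer logistic classifier stands in for the deep networks the article describes. Real systems extract thousands of features from millions of real samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors per executable: [byte entropy, packed-section
# ratio, suspicious-API-call count]. Malware tends to be high on all three.
malicious = rng.normal(loc=[7.5, 0.8, 30.0], scale=[0.3, 0.1, 5.0], size=(200, 3))
benign    = rng.normal(loc=[5.0, 0.1, 3.0],  scale=[0.5, 0.1, 2.0], size=(200, 3))

X = np.vstack([malicious, benign])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = malware, 0 = benign

# Standardize features, then fit a logistic-regression classifier by
# gradient descent -- the simplest stand-in for a deep neural network.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probability of "malware"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the synthetic classes here are well separated, the classifier converges to near-perfect accuracy; the hard part in practice is choosing features and assembling training sets where the two populations overlap far more than they do in this sketch.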
The results are industry-changing because, unlike legacy security products that provided protection either through prior knowledge of a threat (signature-based) or via segmentation and separation, today’s next-generation security products can identify and kill malware as fast as the bad guys can create it. Imagine a world where security technologies actually enable more sharing rather than less, and allow a more open approach to data access rather than a restrictive one. This is the direction deep learning is allowing us to go.
Are you ready?
Disruption is clearly coming to the security space. The market has been waiting for better technology that can keep pace with the fast-evolving adversarial threat. Breakthroughs in deep learning artificial neural networks are now stopping previously unseen attacks in real time, before they even have a chance to run. It’s time to get on board with a new generation of technology that is disrupting traditional castle-and-moat security models.
Tata Communications launched the 2016 F1 Connectivity Innovation Prize, focusing on how virtual reality (VR) or augmented reality (AR) technologies could be used to make the sport more immersive for fans, and help the teams work more effectively together in the run-up to and during each Grand Prix.
The aim of the $50,000 prize is to inspire fans worldwide to harness their technical know-how and passion for F1 racing to drive innovation in the sport through two technology challenges.
Tata Communications is the Official Connectivity Provider of Formula 1, enabling the sport to seamlessly reach its tens of millions of fans across the globe.
The first challenge, set by Formula One Management, calls on technology enthusiasts to develop a solution that uses VR and AR to enable fans at home to experience a Grand Prix virtually.
The solution should allow fans who are not at the live event to immerse themselves into the exhilarating world of F1 racing – from the pit lane and the Formula One Paddock Club, to the drivers’ parade and the starting grid formation.
“We want to give as many fans as possible the opportunity to experience first-hand the thrill of a Grand Prix – and VR or AR could enable us to do just that,” said John Morrison, CTO of Formula One Management and one of the judges.
“These technologies represent the next big innovation opportunity for the sport. In the not-too-distant future, they could enable fans to get virtually transported to a Grand Prix, complementing and enriching the race experience,” said Morrison.
Julie Woods-Moss, Tata Communications’ CMO and CEO of its NextGen Business, said that in the last two years the F1 Connectivity Innovation Prize has grown into a major platform for showcasing the huge potential of data and superfast connectivity in boosting F1 teams’ competitiveness and in bringing fans closer to the sport.
“We now invite fans from all over the world to share their ideas for how VR and AR could take fan engagement to the next level,” she said.
The market for augmented and virtual reality technology continues to heat up, and now one of the more promising startups making both AR hardware and software has raised a $50 million round to keep up the pace.
Meta, which makes an AR headset/glasses of the same name, as well as software to run on it, has raised $50 million in a Series B round of funding. The company plans to use the money to continue building out its technology, developing apps, expanding into new markets like China, and working on the next generation of its headset, the Meta 3 — according to a short statement announcing the round. The news comes just ahead of the E3 gaming conference kicking off this week, where we may see yet more AR and VR news emerge.
This latest round includes investments from Horizons Ventures Limited (which led its $23 million Series A round), as well as a list that includes several strategic backers with several specifically out of China: Lenovo, Tencent, Banyan Capital, Comcast Ventures, and GQY.
Meta is not disclosing its valuation, but filing documents provided to us by VC Experts point to a valuation of up to $307 million post-money for this latest round (the actual valuation depends on how many of the authorized preferred and common shares were issued). The Series B originally started as a $40 million round and then expanded before it closed.
Meta was founded in 2012 and is based out of Redwood City, but also has an R&D operation in Israel, where its founders hail from originally.
Many VR and AR companies tend to focus on the software end of the spectrum, developing content and technology to produce more engaging and realistic (and potentially less nauseating) experiences not just for smartphones and other screens but for newer products like the Oculus Rift, Samsung Gear VR and HTC Vive — devices that appear to be taking a lead in this still-nascent market to tap into more immersive games and other consumer media, as well as more practical enterprise applications.
Some of the most interesting of that group of software startups are getting snapped up by companies that want to make a mark in this area.
Meta is taking a different route: a vertically integrated approach in which it is using its own software development (which is heavy on computer vision, machine learning, and AI based on neuroscience) that works on hardware of its own design, which lets you immerse yourself in virtual situations that are embedded in real environments, giving you the ability to manipulate the virtual elements with gestures and other hand movements.
Taking the vertical route is a road less travelled, but not an entirely unpopulated one. In addition to the likes of Facebook-owned Oculus, apparently Magic Leap — which is still in stealth but nonetheless valued at $4.5 billion after its last round — is also building its AR approach end-to-end, and from the ground up.
Interestingly, the investors think that Meta, despite its far more modest fundraising, could give Magic Leap a run for its money.
“In our view, Meta has built a world-class team,” said Bin Yue, Founding Partner of Banyan Capital, in a statement. “Meta is probably the only startup which has the capabilities to compete with giant companies’ projects like Microsoft Hololens and Magic Leap.”
Back when Meta was more of an idea than a publicly available product, I met Meron Gribetz, Meta’s CEO, for a demo of its prototypes and saw that he had an incredibly focused and singular vision of how he wanted to develop the company. The headset they were working on, he said at the time, was something they wanted to be easy enough to use that it could be attainable by the mass market. That was years ago, so it’s great to see how far the company has come.
“It is incredibly gratifying to have the support of big thinkers and investors who understand the importance of creating a new human-computer interface, anchored in science. Our… investors really get what we’re doing and why Meta is different from the other players in AR,” he said in a statement today. “They understand that the combination of our advanced optical engines along with our neuroscience-based interface design approach are what will create a computing experience that is 100 times easier to use and more powerful than traditional form factors.”
Meta’s funding is a sign of how investors are keen to get in early in what is still far from a mainstream industry, but also a mark of how no one is quite sure which way it will develop.
“Augmented reality represents a transformational platform for communication, collaboration and how individuals will work in the future,” said Michael Yang, Managing Director at Comcast Ventures, in a statement. “Meta’s platform enables a host of new ways to conduct business across a wide array of industries. We look forward to supporting Meta as our first investment in the AR market.”
While several of the investors in this round are based out of China, the GQY involvement in particular will see Meta making some significant inroads into China.
“Through the investment in Meta, GQY is looking to bring the best-in-class Augmented Reality applications to China,” said Jier Yuan, VP, North America, GQY, in a statement. “This goal will be achieved by leveraging Meta’s leading-edge AR hardware, software and GQY’s in-depth knowledge and relationships in industrial training, public transportation and education sectors in China.”
The health care industry is turning to high tech to help consumers think healthy.
Even with hacking threats and privacy breaches everywhere, technology and health companies are using connected health — an emerging field that links patients and doctors remotely — to boost health care analysis and diagnoses.
With that in mind, the AT&T Foundry for Connected Health opened last week, with a goal to use the internet of things, another hot technology field, to innovate the health care space.
AT&T’s Foundry, which resides inside Texas Medical Center’s Innovation Institute in Houston, is currently developing technology like a connected wheelchair to monitor patients in real time. The company is also working on an electroencephalogram headband, a vital-signs monitoring device, to detect patient discomfort.
Chris Penrose, senior vice president of AT&T’s Internet of Things division, said that by connecting things that haven’t been connected before, caregivers and doctors will have the ability to better monitor patients. They can also improve overall patient life, both at home and at health care facilities.
“This is a real way we can bridge together what you’re doing in your home with the health care ecosystem to provide a better experience for that patient,” Penrose told CNBC’s “Closing Bell“.
The overall connected health market is expected to see huge growth in the coming years. A 2015 report by MarketResearch.com estimated the health care internet of things is poised to hit $117 billion within the next several years.
Robert Graboyes, senior research fellow at the Mercatus Center at George Mason University, predicts connected health care will be a dominant form of medicine in a few years, especially among millennials, who are comfortable dealing with electronic devices.
“There is a convergence of technology that is opening up — big data, artificial intelligence — and it’s allowing doctors to identify patterns in health that wouldn’t have been available to intuitive practitioners,” Graboyes said.
Although connected health is a growing industry, with things like artificial intelligence and robotics entering the realm and building excitement, the overall idea is not a new phenomenon.
Analyst Tom Carroll, managing director at Stifel, said that health information technology has been through many cycles. Those precursors have spurred developments like electronic data and record keeping that are hallmarks of the health industry.
Carroll added that recent advancements in technology make today feel like another revolution in the health care space.
Dr. Steven C. Garner, chairman of radiology at New York Methodist Hospital, told CNBC that connected health innovations are not only beneficial for collecting and reporting data; they can also help cash-strapped institutions save money.
Health tech saves hospitals money, Garner said, because doctors can better monitor patients and get second opinions from other medical professionals who may not be physically in the building. That ultimately can lead to quicker discharge times for patients.
The use of some robotics in surgery, Garner said, can also cut down on complications. He noted some doctors will use robots that can perform surgical tasks that result in less bleeding and fewer complications for certain surgeries.
“The accuracy of the technology can help cut down on a lot of problems,” Garner said.
In advertising, how often do we get a chance to explore something completely new, where no rules apply and where the experience needs to be imagined from start to finish? Telling a story, selling a product and leading a user inside a VR ad environment was previously uncharted territory.
While exploring this new medium, we quickly realized that VR holds a huge opportunity for all types of advertisers — if they understand how to harness it.
Even knowing what kind of experience we want to provide when designing a VR ad, we learned that building one has its fair share of challenges. For example, we discovered that images or content placed at the bottom or top of a VR ad tend to warp, so we quickly learned to reserve those areas for the background image only. Despite the challenges, especially in finding the right design elements, it has been a fascinating process from Day One.
Naturally, it is still very early in the evolution of this new medium, and it may take time for it to reach the masses. But these early VR ad experiments show that this technology could be the holy grail for marketers and brand advertisers, offering unparalleled brand engagement and a whole new level of interactivity and awareness.
Let’s dive deeper into why VR could be a huge deal for brands.
Firstly, the option to immerse a user in a brand, or a brand message, is something that we simply couldn’t do before. On TV, online or on mobile, there is still a barrier in the form of the physical device screen between the ad and the user.
With virtual reality, we have a tool that can turn into an incredibly powerful selling channel.
In virtual reality, we can engulf the user in the brand, and place them in practically any scenario that we imagine. Promoting new basketball sneakers? Put the user in the shoes of the best basketball player in the world during a game at Madison Square Garden. The sensation of true presence can only be produced in VR — and all of our senses react to it. This capability is incredibly powerful for any ad campaign.
Secondly, the ability to track, analyze and understand if and how a message made its way to the user is more in-depth and detailed in VR than in any other medium. On TV, we can get a general sense of whether a user saw the ad; on the web, we can track clicks and post-click activity; mobile allows us to track ad activity based on location and device.
With VR, we will be able to track where the user is actually looking within the ad environment we’ve built. Moreover, we’re making strides in tracking and analyzing real human emotions that are experienced inside the VR environment, adding an incredibly valuable and powerful layer to analytics and tracking.
Lastly, interactivity and user engagement inside VR go way beyond what's currently available on other platforms. When users can feel as though they are a real part of an ad and can actually interact, touch and play, they engage with the product on a whole new level. While ad interactivity has begun to emerge in online and mobile formats, those formats are still missing the crucial element that only VR can offer: letting the user exist within the ad itself.
It’s certainly not only brands and advertisers that can benefit from VR. Virtual reality and the entire VR ecosystem have a lot to gain from top advertisers and brands entering the industry, bringing with them a lot of spending that can drive a VR-based “free-to-play” economy. This will allow VR publishers to create amazing, top-quality content, monetize it with gorgeous, interactive ads and distribute it for free.
Free content, and, most importantly, quality content, will be the driving force for mass consumer adoption of VR. If all apps, games and experiences are behind a paywall, it will hinder VR adoption and deter people from testing and exploring this new medium.
With VR, we have a win-win situation. Brands will gain access to what is potentially the most powerful advertising medium in history (though it will take time to learn how to do it right), and publishers can start building incredible VR experiences without burdening themselves with paid distribution and the low download counts that go along with it.
The VR industry is still working out a few kinks — like proper distribution channels. But in the very near future, this ecosystem will have all the ingredients it needs to grow — and thrive.
Living cells are capable of performing complex computations on the environmental signals they encounter.
These computations can be continuous, or analogue, in nature, such as the way the eyes adjust to gradual changes in light levels. They can also be digital, involving simple on/off processes, such as a cell's initiation of its own death.
Synthetic biological systems, in contrast, have tended to focus on either analogue or digital processing, limiting the range of applications for which they can be used.
But now a team of researchers at MIT has developed a technique to integrate both analogue and digital computation in living cells, allowing them to form gene circuits capable of carrying out complex processing operations.
The synthetic circuits, presented in a paper published today in the journal Nature Communications, are capable of measuring the level of an analogue input, such as a particular chemical relevant to a disease, and deciding whether the level is in the right range to turn on an output, such as a drug that treats the disease.
In this way they act like electronic devices known as comparators, which take analogue input signals and convert them into a digital output, according to Timothy Lu, an associate professor of electrical engineering and computer science and of biological engineering, and head of the Synthetic Biology Group at MIT's Research Laboratory of Electronics. Lu led the research alongside former microbiology PhD student Jacob Rubens.
“Most of the work in synthetic biology has focused on the digital approach, because [digital systems] are much easier to program,” Lu says.
However, since digital systems are based on a simple binary output such as 0 or 1, performing complex computational operations requires the use of a large number of parts, which is difficult to achieve in synthetic biological systems.
“Digital is basically a way of computing in which you get intelligence out of very simple parts, because each part only does a very simple thing, but when you put them all together you get something that is very smart,” Lu says. “But that requires you to be able to put many of these parts together, and the challenge in biology, at least currently, is that you can’t assemble billions of transistors like you can on a piece of silicon,” he says.
The mixed signal device the researchers have developed is based on multiple elements. A threshold module consists of a sensor that detects analogue levels of a particular chemical.
This threshold module controls the expression of the second component, a recombinase gene, which can in turn switch on or off a segment of DNA by inverting it, thereby converting it into a digital output.
If the concentration of the chemical reaches a certain level, the threshold module expresses the recombinase gene, causing it to flip the DNA segment. This DNA segment itself contains a gene or gene-regulatory element that then alters the expression of a desired output.
“So this is how we take an analogue input, such as a concentration of a chemical, and convert it into a 0 or 1 signal,” Lu says. “And once that is done, and you have a piece of DNA that can be flipped upside down, then you can put together any of those pieces of DNA to perform digital computing,” he says.
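The threshold-plus-recombinase mechanism Lu describes behaves, in software terms, like a simple comparator that latches an analogue input into a digital state. A minimal sketch in Python for intuition only (the function names and the threshold value are illustrative assumptions, not from the paper):

```python
def threshold_module(concentration, threshold=1.0):
    """Analogue sensor: signals recombinase expression once the
    chemical concentration crosses the threshold."""
    return concentration >= threshold

def comparator(concentration, threshold=1.0):
    """Recombinase flips the DNA segment, converting the analogue
    input into a digital 0/1 output."""
    return 1 if threshold_module(concentration, threshold) else 0

# Sweep an analogue input through the threshold
for c in [0.2, 0.8, 1.0, 1.7]:
    print(c, comparator(c))
```

The biological circuit differs in one important way: because the recombinase physically inverts a DNA segment, the flip is a persistent, heritable state rather than a momentary readout.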
The team has already built an analogue-to-digital converter circuit that implements ternary logic: the device switches on only in response to either a high or a low concentration range of an input, and it is capable of producing two different outputs.
In the future, the circuit could be used to detect glucose levels in the blood and respond in one of three ways depending on the concentration, he says.
“If the glucose level was too high you might want your cells to produce insulin, if the glucose was too low you might want them to make glucagon, and if it was in the middle you wouldn’t want them to do anything,” he says.
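Lu's three-way glucose scenario amounts to a band comparator with two distinct outputs. A toy sketch in Python (the thresholds here are hypothetical placeholders chosen for illustration, not values from the study):

```python
def glucose_response(glucose, low=70, high=180):
    """Ternary logic: one output above the band, a different output
    below it, and no action inside the acceptable middle range."""
    if glucose > high:
        return "insulin"   # level too high: counteract with insulin
    if glucose < low:
        return "glucagon"  # level too low: counteract with glucagon
    return "none"          # in range: do nothing

print(glucose_response(250))  # insulin
print(glucose_response(40))   # glucagon
print(glucose_response(100))  # none
```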
Similar analogue-to-digital converter circuits could also be used to detect a variety of chemicals, simply by changing the sensor, Lu says.
The researchers are investigating the idea of using analogue-to-digital converters to detect levels of inflammation in the gut caused by inflammatory bowel disease, for example, and releasing different amounts of a drug in response.
Immune cells used in cancer treatment could also be engineered to detect different environmental inputs, such as oxygen or tumor lysis levels, and vary their therapeutic activity in response.
Other research groups are also interested in using the devices for environmental applications, such as engineering cells that detect concentrations of water pollutants, Lu says.
Ahmad Khalil, an assistant professor of biomedical engineering at Boston University, who was not involved in the work, says the researchers have expanded the repertoire of computation in cells.
“Developing these foundational tools and computational primitives is important as researchers try to build additional layers of sophistication for precisely controlling how cells interact with their environment,” Khalil says.
The research team recently created a spinout company, called Synlogic, which is now attempting to use simple versions of the circuits to engineer probiotic bacteria that can treat diseases in the gut.
The company hopes to begin clinical trials of these bacteria-based treatments within the next 12 months.
MIT chemists have devised a new way to synthesize a complex molecular structure that is shared by a group of fungal compounds with potential as anticancer agents. Known as communesins, these compounds have shown particular promise against leukemia cells but may be able to kill other cancer cells as well.
The new synthesis strategy, described in the Journal of the American Chemical Society, should enable researchers to generate large enough quantities of these compounds to run more tests of their anticancer activity. It should also allow scientists to produce designed variants of the naturally occurring communesins, which may be even more potent.
“This is just the foundation,” says Mohammad Movassaghi, an MIT professor of chemistry and the paper’s senior author. “We’ve laid the foundation for implementation of this strategy to access other variations, both natural and nonnatural.”
Communesins are a unique family of complex, polycyclic, naturally occurring alkaloids. One of the major hurdles to synthesizing communesins in the lab is a chemical reaction in which two large, bulky molecules must be joined together in a step known as heterodimerization.
Movassaghi’s lab, which has been working on this type of synthesis for several years, was inspired by the way related compounds are produced in nature. The details of the natural synthesis are not fully known, but it is believed that it also involves a heterodimerization step. In fungi, there is evidence that an enzyme catalyzes this reaction.
Without an enzyme, the heterodimerization required to produce communesins is difficult to carry out because it requires forming a bond between two carbon atoms that are each already bonded to four other atoms, some of which have additional bulky groups attached to them. This makes it challenging to bring the two molecules close enough for them to fuse together.
To overcome this, Movassaghi’s lab developed an approach in which they transform the two carbon atoms into carbon radicals (carbon atoms with one unpaired electron). To create these radicals, the researchers first attach each of the targeted carbon atoms to a nitrogen atom, and these two nitrogen atoms bind to each other.
When the researchers shine certain wavelengths of light on the reactants, it causes the two atoms of nitrogen to break away as nitrogen gas, leaving behind two very reactive carbon radicals that join together almost immediately.
“If you break the carbon-nitrogen bond, the intermediate has a very short lifetime. We predict it to be roughly on the order of picoseconds,” Movassaghi says. “Dinitrogen pops out and now you have two radicals in very close proximity.”
Once the heterodimer is formed, three more chemical steps are required, including the transfer of a nitrogen-containing chemical group from one carbon atom to another.
“Just heterodimerizing is only half the battle,” Movassaghi says. “There were two major challenges in this successful synthesis. One was how do you get to a heterodimer, and once you fuse the two halves together, how do you guide the rearrangement to match the structure that you find in nature?”
In this study, the MIT team prepared a key precursor that was converted to the compound known as communesin F in only five steps. The critical heterodimer rearrangement step proceeded to yield 82 percent of the desired heptacyclic communesin structure.
Scott Miller, a professor of chemistry at Yale University, describes the new approach as “a masterful synthesis.”
“The strategy is incredibly ambitious and reflects a sophisticated assessment of the plausible biosynthetic precursor. This is really very clever, since these pathways are typically not known at the level of complete understanding, so outstanding intuition and creativity are required,” says Miller, who was not involved in the research.
This strategy can also be used to produce related communesins, including variants not found in nature.
“Nature has likely evolved these compounds for chemical defense or signaling between different organisms, but if we’re thinking about their potential for treatment of human disease, we may need to access nonnatural derivatives,” Movassaghi says. “Our ability to go in with pinpoint accuracy and make structural variations to these complex alkaloids is going to be helpful in enabling the thorough evaluation of these compounds and related derivatives.”
The study was conducted by graduate student Matthew Pompeo and former postdocs Stephen Lathrop and Wen-Tau Chang. The project was funded by the National Institutes of Health and the National Science Foundation.