Opportunities for Investing in Artificial Intelligence (AI)

Viewpoints

October 26, 2018

TCW believes Artificial Intelligence (AI) will be the foundational technology of the information age. We believe AI is a paradigm-shifting technology for the global economy, with broad and deep implications. For one, AI could be a key catalyst driving productivity growth higher around the world as more companies automate their processes. In its white paper, “Why Artificial Intelligence is the Future of Growth,” Accenture, a global IT services firm, estimated that AI could boost labor productivity by up to 40% over the next 15-20 years.

We believe that AI makes it possible for machines to learn from experience, adjust to new inputs, and ultimately perform human-like tasks. This leap in computing, from humans telling computers how to act to computers learning how to act, has significant implications across almost every industry. These include:

  • Industrial/Manufacturing: factory automation efforts to increase human capital productivity
  • Media: movie/content recommendations; optimize movie cast
  • Healthcare: accelerate the search for new drugs against cancer; robot-assisted surgery, automated image diagnosis
  • Financial Services: prevent fraudulent transactions; virtual advisors, algorithmic trading
  • Automotive/Transportation: self-driving vehicles
  • Energy: improve overall energy efficiency; smart cities
  • Agriculture: improve agriculture productivity

AI is the Next Phase in Disruption Evolution

Source: FundStrat

TCW believes AI could be as disruptive as, or potentially more disruptive than, the industrial revolution as the global economy becomes more digitized and technology becomes more pervasive. As a result, we believe AI provides a tremendous growth opportunity over the next several decades, and in our view, companies in the AI ecosystem will have higher growth rates and higher profitability than more marginal AI players. Furthermore, we believe AI is here today, as leading companies are already using it as a competitive advantage. We believe the leading AI companies will see higher growth rates, greater profitability, and ultimately higher equity valuations than those that are lagging in this area.

We Believe Artificial Intelligence Is Compelling Now

History of AI – what is it and why is it happening now?

So what is Artificial Intelligence?

AI is a set of algorithms and techniques that allow computers to mimic human intelligence. The concepts of AI have been around a long time, going back to the 1950s and 60s. During the dawn of computing in the late 1940s/early 1950s, computer pioneer and AI theorist Alan Turing was already grappling with the question, “Can machines think?” In fact, Turing wrote a report for the National Physical Laboratory, entitled “Intelligent Machinery,” which contained an early discussion of neural networks. John McCarthy, an American computer scientist often referred to as “the father of AI,” played a key role in the development of intelligent machines. In 1959, he co-founded the MIT Artificial Intelligence Laboratory and later founded Stanford’s AI Laboratory, known as SAIL, serving as its director from 1965 to 1980. McCarthy believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” However, this would prove to be a challenge for McCarthy, Turing, and others, as they were truly ahead of their time in thinking about machine intelligence. That said, much of their work has become foundational for what has transpired today. For example, McCarthy created the Lisp programming language, which was the standard AI programming language for quite some time. Not until very recently was there enough computational power to make these concepts work.

History

From a very high level, AI is composed of multiple facets. These include the following:

  • Reasoning: Internalize and analyze data information through neural networks, deep learning and ensemble learning
  • Knowledge representation: Utilize knowledge to make decisions and take action
  • Planning and navigation: Provide predictions and insights to achieve desired outcome
  • Natural language processing: Voice and text interactions to elucidate meaning and context
  • Perception: Replicate human senses to identify discrete steps and applications for a task
  • Generalized intelligence: Emotional intelligence, creativity, moral reasoning and intuition

From a very simplistic perspective, the easiest way to think of the relationship between AI, machine learning, and deep learning is as concentric circles, as shown in the figure below. AI came first and is the broad concept of machines being able to carry out tasks with some level of human intelligence. Examples of general tasks include planning, understanding language, recognizing objects and sounds, learning, and problem solving. The field was pioneered by the likes of Alan Turing, John McCarthy, and others, as mentioned above.

Machine learning (ML) is a subset of AI based on the idea that machines should be able to learn for themselves when simply given access to data. Arthur Samuel, an American computer scientist, coined the term “machine learning,” defining it as “the ability to learn without being explicitly programmed.” Rather than writing millions of lines of code with complex rules and decision trees, machine learning is a way of training an algorithm so that it can learn how to make accurate decisions. This type of training requires feeding the algorithm huge amounts of data and allowing the algorithm to adjust itself and improve. For example, machine learning has been used to make great improvements in computer vision (the ability of a machine to recognize an object). By feeding an algorithm enough pictures, a machine with the proper ML algorithm can learn with high accuracy what a cat or any other object looks like.
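The idea of learning a decision rule from labeled examples, rather than hand-coding the rule, can be sketched in a few lines of Python. Everything below is invented for illustration: the two numeric “features” stand in for measurements extracted from pictures, and a simple nearest-neighbor lookup stands in for a real trained model.

```python
# Toy illustration of learning from labeled data: classify a new example
# by finding the most similar training example. Features are hypothetical
# numeric descriptors, not real image data.

def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(train, key=lambda ex: dist(ex[0], query))
    return best[1]

# Labeled training data: (features, label)
examples = [
    ((0.9, 0.2), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.1, 0.8), "dog"),
]

print(nearest_neighbor(examples, (0.85, 0.25)))  # a cat-like input
```

Note that no cat-versus-dog rule was ever written down; the decision comes entirely from the labeled examples, which is the essence of Samuel’s definition.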

Figure: Traditional Programming vs. Machine Learning

Today, the exponential growth in AI has led to the creation of well-established AI frameworks built by some of the largest AI companies in the world (Amazon, Facebook, Google, Microsoft, etc.). These frameworks are essentially software abstraction layers that provide a standard way for users to build and deploy applications. The benefit of using an off-the-shelf framework is that it lets developers focus on building applications and solving the problem at hand rather than wrestling with cumbersome software environment issues. NVIDIA’s CUDA computing platform works with all of the leading AI frameworks and is a key reason its GPUs have essentially become the de facto AI computing platform of choice. Examples of the most popular frameworks include:

  • Apache MXNet: Amazon and NVIDIA are the key driving forces behind MXNet. This framework works particularly well in AWS environments, where a scalable deep learning architecture is required.
  • Caffe: Caffe is an independent framework developed at the University of California, Berkeley. It is open-sourced and supported by a large, global network of contributors, largely from academia, and its applications typically focus on deep learning tasks such as image classification.
  • Caffe2: Facebook announced Caffe2 in April 2017, which included new features such as Recurrent Neural Networks. In March 2018, Caffe2 was merged into PyTorch.
  • Microsoft Cognitive Toolkit: This is a deep learning framework developed by Microsoft.
  • PyTorch: This is an open-source Python-based machine learning library that was built to provide flexibility as a deep learning development platform. This was developed primarily by Facebook.
  • TensorFlow: This is Google’s open-source library of tools and software that help accelerate the development process of machine learning applications.

Machine learning algorithms can essentially be divided into three general categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning attempts to infer a function from labeled training data consisting of a set of training examples. Suitable applications include image classification, fraud detection, and weather forecasting. Unsupervised learning is the task of trying to discover implicit relationships in a given unlabeled dataset. Since the data here is unlabeled, there is no error signal to evaluate a potential solution. Clustering algorithms are the most common form of unsupervised learning; these group a set of objects so that objects in the same cluster are more similar to each other than to those in other clusters. Reinforcement learning, the third category, trains an agent through trial and error, rewarding actions that lead toward a desired outcome; it underpins applications such as game-playing systems and robotics.
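The clustering idea can be sketched with the classic k-means algorithm (one well-known clustering method, used here purely as an illustration) on made-up one-dimensional data: no labels are supplied, yet the algorithm discovers the two groups on its own.

```python
# Minimal k-means sketch (unsupervised learning): repeatedly assign each
# unlabeled point to its nearest centroid, then move each centroid to
# the mean of its assigned points.

def kmeans(points, centroids, iters=10):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda j: (p - centroids[j]) ** 2)
            clusters[nearest].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # two obvious groups, no labels
centroids, clusters = kmeans(points, [0.0, 5.0])
print(sorted(round(c, 2) for c in centroids))  # [1.0, 9.07]
```

Supervised learning would instead be given the group label for every point; here the structure is recovered from the data alone.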

Deep learning takes an even narrower approach to AI (often called “narrow AI”), as it is based on a concept called Artificial Neural Networks (ANN). An ANN is a computing system or set of algorithms that mimics the biological structure of the brain, namely the interconnection of neurons, and thus classifies information in a manner analogous to the human brain. In an ANN, neurons are organized in discrete layers with connections to other neurons. Each layer is associated with a specific feature to learn, such as curves/edges in image recognition. It is this layering that gives deep learning its name: depth is created by using multiple layers as opposed to a single layer. Self-driving is one of the most common deep learning applications, but there are others as well, such as image classification on Pinterest, face recognition on Facebook, voice recognition from Amazon Alexa, and cancer diagnostics.

Traditional computers “think” very differently than the human brain. The transistors in a typical computer (leading chips now approach 20 billion transistors or more) are wired in a relatively simple, largely serial manner (two or three connections each), whereas the neurons in a human brain are densely interconnected in a complex, parallel manner (up to 10,000 connections per neuron). With the GPU, neural networks can take advantage of the parallel processing that the GPU does so well, so a computer can now be organized in a densely connected manner similar to the human brain.

Source: https://www.quora.com/How-do-artificial-neural-networks-work#

A neural network works by simulating a large number of densely interconnected brain cells inside a computer, so that the computer can learn to do things such as recognize patterns and make decisions in a human-like manner. A typical neural network has anywhere from a few dozen to several million artificial neurons, called units or nodes, arranged in a series of layers, each of which is interconnected with its neighboring layers, as shown in the diagram to the right. The three layers shown are designated the input layer, hidden layer, and output layer. The units/nodes hold values ranging between 0.0 and 1.0 and are linked by connections, each of which has a weight, designated “w” in the diagram. Signal values then traverse from the input layer through the connection weights to the hidden nodes, and then through more connection weights to the output nodes.
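The signal traversal described above can be sketched directly. The weights below are arbitrary illustrative numbers, not trained values; a sigmoid squashing function (one common choice) keeps every node’s value in the 0.0-1.0 range the text describes.

```python
import math

# Sketch of a single forward pass: input values flow through weighted
# connections to the hidden nodes, then through more weights to the
# output node. All weights are arbitrary, untrained illustrative values.

def sigmoid(z):
    # squashes any signal into the 0.0-1.0 range
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, w_hidden, w_output):
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in w_output]

x = [0.5, 0.9]                       # input layer values (0.0-1.0)
w_hidden = [[0.8, 0.2], [0.4, 0.9]]  # one weight row per hidden node
w_output = [[0.3, 0.5]]              # one weight row per output node
y = forward(x, w_hidden, w_output)
print(round(y[0], 3))                # ≈ 0.636
```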

As shown on the left of the diagram, the neural network has a set of input nodes labeled “x,” from 1 to n (the training data), and a set of output nodes labeled “y,” from 1 to l, which might represent the object the network is trying to identify. To train the network, a feedback process called backpropagation is used, which involves comparing the output the network produces with the output it was meant to produce. Based on the difference between the two, the network modifies the weights of the connections between its nodes, working backwards from the output nodes through the hidden nodes to the input nodes. This process continues until the difference between the actual and intended output is effectively zero.

Once the network is trained with enough learning examples, it reaches a point where it can make the correct assessment with an entirely new set of inputs. For example, suppose you have been teaching a network by showing it lots of pictures of cars and bicycles. The inputs to the network are essentially binary numbers. Say you had five inputs based on the following questions: 1) Does it have two wheels? 2) Does it have four wheels? 3) Does it have one seat? 4) Does it have handlebars? 5) Does it have an engine? A typical bike would have the answers yes, no, yes, yes, no, or 10110. So during the learning phase, the neural network simply sees a series of numbers and learns that some inputs mean a car, which might be an output of 1, while others mean a bicycle, which translates into an output of 0.
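The whole loop can be sketched end-to-end: a tiny network with one hidden layer, trained by backpropagation on the two bicycle/car patterns above (bicycle 10110 → 0, car → 1). The layer sizes, learning rate, and epoch count are arbitrary choices for this toy problem, not anything a production system would use.

```python
import math
import random

# Toy training example: a 5-input, 3-hidden, 1-output network learns to
# separate the bicycle and car feature patterns via backpropagation.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
n_in, n_hid = 5, 3
# the extra slot in each row is a bias weight fed by a constant 1.0 input
w_h = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
       for _ in range(n_hid)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(n_hid + 1)]

def forward(x):
    xb = list(x) + [1.0]                      # inputs plus bias input
    h = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in w_h]
    hb = h + [1.0]                            # hidden plus bias input
    y = sigmoid(sum(w * v for w, v in zip(w_o, hb)))
    return xb, hb, y

data = [([1, 0, 1, 1, 0], 0.0),  # bicycle: 2 wheels, 1 seat, handlebars
        ([0, 1, 0, 0, 1], 1.0)]  # car: 4 wheels, engine

lr = 0.5
for _ in range(5000):
    for x, target in data:
        xb, hb, y = forward(x)
        # backpropagation: push the output error back through the weights
        d_y = (y - target) * y * (1 - y)
        d_h = [d_y * w_o[j] * hb[j] * (1 - hb[j]) for j in range(n_hid)]
        for j in range(n_hid + 1):
            w_o[j] -= lr * d_y * hb[j]
        for j in range(n_hid):
            for i in range(n_in + 1):
                w_h[j][i] -= lr * d_h[j] * xb[i]

bike = forward([1, 0, 1, 1, 0])[2]
car = forward([0, 1, 0, 0, 1])[2]
print(round(bike, 2), round(car, 2))  # bike trains toward 0, car toward 1
```

After training, the same forward pass that once produced essentially random outputs now maps the bicycle pattern near 0 and the car pattern near 1, which is exactly the “difference between actual and intended output is effectively zero” condition described above.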

To increase the performance of a neural network, one can increase the number of hidden layers, increase the number of units or neurons, or provide more data. However, each of these requires more computational power to run the neural network algorithms, so there is a trade-off the user must make.

Natural Language Processing (NLP) is a form of machine learning and refers to the study and development of computer systems that can interpret speech and text as humans naturally speak and type it. However, human communication is vague at times as humans often use colloquialisms and abbreviations, which make computer analysis of natural language challenging. That said, in the last decade NLP as a field has made immense strides.
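One of the simplest NLP building blocks is the bag-of-words representation, which turns free text into word counts a model can compute with. This is a toy sketch of the general idea, not how production NLP systems work; the sentence is invented.

```python
from collections import Counter

# Toy bag-of-words sketch: lowercase and strip punctuation so that
# "Great!" and "great" count as the same word, then tally frequencies.

def bag_of_words(text):
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    return Counter(t for t in tokens if t)

doc = bag_of_words("The service was great, really great!")
print(doc["great"])  # 2
```

Even this crude normalization hints at why natural language is hard: colloquialisms, abbreviations, and word order are all thrown away, which is precisely the information modern NLP models work to recover.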

Source: TCW, NVIDIA Investor Presentation

Why is AI happening now? AI has been inflecting higher over the past few years largely due to three factors:

  1. More affordable and powerful computer processing, primarily through NVIDIA’s Graphics Processing Unit (GPU), which makes it easier to deploy parallel processing on a wider scale. Parallel processing is useful for AI/deep learning applications such as image recognition because it allows many calculations to be made simultaneously. NVIDIA’s Volta GPU can identify over 900 images per second, compared to five images per second for a conventional CPU, a 180x speedup. This is leading to an explosion of deep learning AI applications, from image recognition to cancer identification to self-driving cars.
  2. The explosion of data. As the world becomes more digitized (Gartner estimates that the vast majority of the world will be digitized over the next 10 to 20 years), we are beginning to see an explosion of useful data to analyze. Data storage has become inexpensive, and with flash memory, quicker to access. With faster networking speeds (from 10/40G to 100/200/400G), it has become much easier to perform tasks in real time.
  3. Cloud Computing. Lastly, to aggregate all of this data in a central location, companies such as Amazon, Apple, Facebook, Google, and Microsoft are deploying cloud computing networks. For example, with home products such as the Amazon Echo, most of the computational work occurs in Amazon’s data centers, not in the Echo device itself.

Source: NVIDIA Investor Presentation

More Data Analyzed = Better Quality Artificial Intelligence

Source: LAM Research Investor Presentation

The Rising Data Economy

Source: LAM Research Investor Presentation

These three factors have led to an AI renaissance, as researchers are now able to take advantage of many of the AI algorithms developed over the past 20-30 years. Computers can now be trained to think like a human, and with high-speed communications, AI can be deployed in real time. We envision AI progressing toward independence through the following four levels:

  • Level 0: Simple Programs – Basic applications of technology involving timing, application or action to complete a task, e.g. dishwasher.
  • Level 1: Basic Artificial Intelligence – Utilize input to determine and compute corresponding output, e.g. classic puzzle-solving games.
  • Level 2: Machine Learning/Deep Learning – Decision making based on existing information or algorithm, extract rules and decision making standards based on sample data.
  • Level 3: Independent Generation Artificial Intelligence – Decision making based on self-identifying target data within a dataset and learning, further generating programs and advancements itself. Adversarial machine learning would fall into this category, where AI systems are working together and teaching each other. The benefit of adversarial learning is it is able to generate its own set of data. Driverless cars are one example here. In fact, NVIDIA’s Drive Constellation Simulation System is a loose application of adversarial learning where it is using simulation software to create data for “corner cases” such as for a tornado, hurricane, or a small child suddenly running into the street to retrieve a ball.

In conclusion, our outlook for AI is very promising and thus, we believe the current investment backdrop for this technology is very compelling. In our view, it has the potential to impact a number of key “daily life” areas such as labor, entertainment, mobility, safety, and convenience. Our key investment AI themes are as follows:

  • Advancements in computing power and data storage resources complement and facilitate new and widespread application of AI software
  • AI spurs chain-reaction in development and advancements in hardware and software to strive for increased efficiency, automation, consistency, speed and scale
  • Technological automation of tasks across sectors relying on analyses, judgments and problem solving through learning, speech recognition and visual perception
  • AI increases efficiency and consistency through the use of virtual tools by addressing challenges in real-time as new data arrives
  • New threats to the competitive advantages of businesses across sectors have emerged and businesses are rapidly recognizing the need to increase their investment in artificial intelligence technologies to remain competitive


Autonomous Driving

Perhaps the most enticing application of artificial intelligence that we hear about today is that of self-driving vehicles. Companies are pouring billions of dollars’ worth of investment into the field of autonomous driving technology as it relates to both consumer and commercial vehicles.

The industry estimates that today’s $3B Advanced Driver Assistance Systems (ADAS) market will evolve into what could be a $100B autonomous vehicle market by 2025. The growth of autonomous technology will be geographically broad-based, will span a variety of subsectors within the automotive market, and will have productivity implications across multiple industries. The economic and social benefits of autonomous technology will serve as a key driver for development, as many expect self-driving vehicles to reduce accidents (approximately 90% of which are caused by human error), diminish congestion, increase productivity, and contribute to overall fuel efficiency.

Autonomous Driving – History

Source: LAM Research Investor Presentation

Autonomous Driving – Alphabet Inc. (GOOGL)

Waymo, LLC, the self-driving unit of Alphabet Inc., plans to launch its robo-taxi service in Phoenix in the coming months, with 600 vehicles providing daily autonomous rides. The company will initially begin with a fleet of Chrysler Pacifica minivans powered by Waymo’s artificial intelligence technology, with a goal of eventually adding up to 82,000 Chrysler hybrid vans and electric Jaguar i-Pace SUVs over the next couple of years. The company has amassed over ten million miles of on-the-road testing and billions of computer simulation miles. When the robo-taxi service launches, riders will be able to use a smartphone app (similar to Uber) to order a driverless ride. Waymo has already been experimenting with first-mile, last-mile rides in which riders can have a driverless vehicle pick them up from home and drop them off at a public transit stop within Phoenix’s Valley Metro network of bus, rail, and paratransit stops.

Source: Waymo

On the commercial side, autonomous technology is also beginning to be used for package and food delivery. Domino’s is partnering with Ford Motor Co. to test how self-driving vehicles could deliver pizzas in Miami. Domino’s has tested about 100 trips to gauge consumer reaction to picking up pizza from an autonomous vehicle, and early indications are that the concept is resonating well with customers. Pizza Hut has joined forces with Toyota in a similar partnership, with plans to test autonomous vehicle delivery in 2020. On the grocery side, Kroger, the largest grocery chain in the U.S., has partnered with Nuro, a self-driving startup, to deliver groceries to customers in Arizona. The partnership is currently using modified Toyota Priuses for delivery during the test program.

Source: Google Image Search, Ford Press Release

Consumer Products

Consumers are beginning to see artificial intelligence permeate their lives more and more through various applications and functions. Google Maps and Waze use artificial intelligence to suggest faster routes based on the vast amount of data Google has amassed from its users. Using this data, the map applications can predict work commute times, provide expected travel times, and offer optimal route suggestions.

Social media platforms such as Facebook and Instagram use artificial intelligence to personalize newsfeeds and deliver ads geared toward specific user profiles. Facebook Inc. uses machine learning through its Applied Machine Learning (AML) and Facebook Artificial Intelligence Research (FAIR) teams to advance several aspects of the core newsfeed, including advanced translation services and video captioning services, which drove a 40% increase in viewing time. Facebook also uses algorithms based on what the user likes, what the user has previously liked, and what other users with similar tastes have liked in order to determine the content most likely to gain that user’s interest. Instagram uses machine learning to identify context within a conversation and replace a word with appropriate emojis.

Source: Facebook

Source: Facebook

Netflix, Amazon, and YouTube have increasingly been implementing artificial intelligence within their technology to uncover more useful content recommendations for their subscriber bases. Netflix says that about 70% of the content its subscribers watch comes from a recommendation. The recommendation engine is constantly learning about a subscriber’s preferences based on their watch history and the watch histories of those with similar profiles.
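The similar-profiles idea can be sketched with a toy collaborative filter: measure how alike two users’ watch histories are (cosine similarity here) and suggest whatever the most similar peer watched. Users, titles, and histories are all invented for illustration.

```python
import math

# Toy collaborative filtering: recommend titles watched by the user
# whose viewing history is most similar to yours.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

titles = ["drama_a", "drama_b", "scifi_a", "scifi_b"]
histories = {            # 1 = watched, 0 = not watched
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 0, 1],
    "carol": [0, 0, 1, 1],
}

def recommend(user):
    me = histories[user]
    # find the other user with the most similar history
    peer = max((u for u in histories if u != user),
               key=lambda u: cosine(me, histories[u]))
    # suggest what the peer watched that the user has not
    return [t for t, mine, theirs in zip(titles, me, histories[peer])
            if theirs and not mine]

print(recommend("alice"))  # ['scifi_b']
```

Real systems factor in ratings, recency, and millions of users, but the core “people like you watched this” logic is the same.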

Source: Google Image Search

Amazon.com, Inc. uses machine learning and artificial intelligence in all aspects of its businesses, both retail and Amazon Web Services. On the retail side, algorithmic pricing changes the list price of items, sometimes thousands of times per day; tens of thousands of Kiva robots assist human workers in fulfilling orders; and drones have started delivering orders in the U.K. and U.S. Amazon has the leading position in the next frontier of AI, voice, through its Echo devices and the Alexa platform. In cloud computing, Amazon offers a family of AI services that provide cloud-native machine and deep learning, including natural language understanding and automatic speech recognition (Lex), visual search and image recognition (Rekognition), and text-to-speech (Polly).

Source: Amazon

One of the most direct contacts end consumers have with artificial intelligence today is through their interactions with personal assistants, which are becoming more prevalent in consumer devices such as smartphones, speakers, vehicles, watches, and computers. According to PwC’s annual Global Consumer Insights Survey (Source: https://www.pwc.com/gx/en/industries/consumer-markets/consumer-insights-survey.html), about 10% of consumers around the world own what could be described as an “AI device,” and approximately a third of all consumers say they plan on purchasing one.

Source: Google Image Search

Google’s dominant presence in the search engine market allows its AI technology to offer a wealth of information to consumers via Android phones, Google Home, and in-vehicle platforms. The Google Assistant can integrate your Gmail account, Google Calendar, and Maps to create a visual snapshot of your daily schedule. Google Assistant is also powered by conversational AI, allowing it to schedule meetings and appointments for you, as well as order products from Google Express and retailer partners such as Target. Google is also implementing AI in various other consumer products and services, such as an autocomplete feature in Gmail, which uses machine learning to offer suggestions for completing sentences. The Google Photos app can suggest who a user might want to share photos with and can automatically add color to black-and-white photos. In the latest version of Android, Google uses AI to adjust screen brightness after studying a user’s manual adjustment patterns, and can even have the battery adapt to how a consumer uses apps in order to conserve energy.
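The general idea behind autocomplete can be sketched without assuming anything about Gmail’s actual implementation: count which word most often followed each word in a training corpus (invented here), then suggest the most frequent follower.

```python
from collections import defaultdict, Counter

# Toy next-word suggestion: a bigram frequency table built from a tiny,
# made-up corpus stands in for a real learned language model.

corpus = ("thanks for the update . thanks for the update . "
          "see the attached file").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # 'update' (it followed "the" most often)
```

Modern systems use neural language models over vastly more text, but the underlying task, predicting the most likely continuation from observed usage, is the same.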

Source: CB Insights

Microsoft Corp. has gained a strong presence in the AI landscape behind innovative products such as Azure (cloud computing/services), Cortana (speech-to-text, personal assistant), Wand Labs (conversation-as-a-platform), Skype (language translation through voice) and other various applications related to cognitive computing, augmented reality, and conversational bots. At J.P. Morgan’s Global Tech conference in May, Microsoft’s Chief Marketing Officer discussed implementing AI within products used on a daily basis, such as in Microsoft Office, Cognitive Services, and Microsoft Cloud.

Source: https://aws.amazon.com/mp/ai/

Healthcare

Artificial intelligence is being used for a wide variety of medical applications such as improving the diagnoses of diseases, accelerating the search for new drugs, and implementing better monitored treatment plans for patients.

Illumina has been a leading player in the use of artificial intelligence in genomics. The company’s AI software can be used to distinguish genetic mutations that may lead to diseases, separating these deviations from the millions of benign variants found in individuals. With the vast amount of genomic data now being collected, Illumina is looking to make its AI tools widely available to clinicians and researchers so that they can more easily manage and analyze this stream of data. The Illumina Accelerator platform is also investing in various startups focused on genomics, such as Unite Genomics, which uses machine learning to more accurately and quickly analyze trials and therapies at population scale.

The field of oncology is especially ripe for productivity improvements, as clinicians often spend hours on data-mining exercises. AI and machine learning can crunch through the data and augment clinicians’ ability to predict patient prognoses, personalize treatment plans, and reduce the error rate of diagnoses. For example, AI software is currently being used to perform automated mammography analysis and to replace the significantly time-intensive task of manual cell counting.

Intuitive Surgical is the market leader for surgical robots in medicine. Its da Vinci robotic systems are used in cardiac, colorectal, gynecological, head and neck, thoracic, and urologic procedures, and their applications are also moving into hernia and bariatric surgery for weight management. The robots have participated in over five million minimally invasive surgeries to date, and the total portfolio includes 44,000 trained da Vinci surgeons worldwide.

Moving beyond human healthcare, AI is penetrating the veterinary market as well. IDEXX Laboratories, a large player in that market, uses artificial intelligence in products such as its SediVue Dx Urine Sediment Analyzer. The SediVue Dx uses a neural network trained to analyze urine samples in an easy, consistent, and accurate way, leveraging a database of 14 million images submitted from over 200,000 animals.

Source: Google Image Search

Financial Services

Credit card companies such as Visa and Mastercard have been using machine learning to become increasingly effective against fraudulent transaction attempts. Visa’s neural networks and self-improving algorithms have allowed the company to discover previously unknown correlations in consumer spending patterns and to detect unusual transaction activity more quickly and accurately than before. Mastercard has been speeding up the adoption of artificial intelligence via its “AI Express” service. The AI Express program helps businesses improve fraud risk management, prevent money laundering, predict credit risk, increase operational efficiencies, and implement cyber security.
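The unusual-activity idea can be sketched statistically: score a new charge by how far it sits from an account’s typical spend and flag large deviations. The amounts and the three-standard-deviation threshold are invented; real fraud systems use far richer features (merchant, location, timing) and learned models rather than a single rule.

```python
import statistics

# Toy anomaly check: flag a charge more than `threshold` standard
# deviations from the account's historical mean spend.

def is_suspicious(history, amount, threshold=3.0):
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    return sd > 0 and abs(amount - mean) / sd > threshold

past = [25.0, 40.0, 31.0, 28.0, 35.0]  # typical charges for the card
print(is_suspicious(past, 2500.0))     # True  (far outside normal range)
print(is_suspicious(past, 45.0))       # False (within normal range)
```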

Banks have also been investing large sums of money to build out their AI capabilities and talent. JPMorgan, for example, has been using big data and predictive analytics to help businesses optimize cash flow, develop risk management practices, and discover hidden revenue opportunities. JPMorgan has said that the use of machine learning is allowing it to deliver more reliable forecasts for its business customers. JPMorgan, Bank of America, and Wells Fargo have also been introducing clients to their AI-powered assistants, which will be able to address queries and anticipate client needs as they relate to corporate payments. JPMorgan recently hired a senior executive from Google (previously Google’s head of product management for cloud-based AI) as the bank’s head of artificial intelligence and machine-learning services, just one recent example of the investments banks have been making in AI talent. Goldman Sachs CEO Lloyd Blankfein has been pushing technology as a driver of the company’s business as well: approximately 25% of the headcount in Goldman’s fixed income, currencies, and commodities department is comprised of technology engineers.

Source: Google Image Search

Enterprise

Artificial intelligence adoption continues to increase within enterprises, as the use cases for driving customer value and operational efficiency become more and more apparent. Various surveys point to increasing momentum in enterprise AI adoption. According to The Economist Intelligence Unit (Source: https://www.eiuperspectives.economist.com/sites/default/files/Artificial_intelligence_in_the_real_world_1.pdf), approximately 75% of executives say that AI will be “actively implemented” in companies within the next three years. A survey by Narrative Science (Source: https://narrativescience.com/Resource-Library/PR/artificialintelligence-ai-adoption-grew-over-60-in-the-last-year) showed that 61% of respondents had implemented AI within their businesses in 2017, compared to 38% in 2016.

Salesforce introduced Salesforce Einstein, which records interactions between companies and customers to power AI applications that improve capabilities in sales, service, and marketing. Salesforce Einstein is a leader in bringing AI solutions to business clients, as the platform delivers predictions and recommendations tailored to each client’s unique business processes and end-customer needs. For example, Einstein can prioritize leads and opportunities based on estimated conversion rates, allowing an enterprise’s sales team to optimize time allocation when pursuing new customers.

Source: https://bizedge.co.nz/story/inside-salesforce-einstein-ai-help-crm-work-smarter/

Splunk is another company driving AI use cases in enterprise applications. Splunk is a growing provider of real-time and predictive analytics software that uses machine-generated big data for functions such as searching, monitoring, visualization, and problem diagnosis. Splunk’s platform prepares and aggregates data more quickly and efficiently than a data scientist can manually, and iteratively adapts to changing behavior to highlight anomalous activity. Organizations can use Splunk to remediate priority issues for risk management, reduce security investigations from hours to minutes, and automate compliance reporting, allowing for reduced labor and increased accuracy.

Source: http://www.idevnews.com/stories/7106/With-Expanded-Machine-Learning-Capabilities-Across-Portfolio-Splunk-Unveils-New-Use-Cases

ServiceNow, Inc. is an enterprise software company that provides automated workflow processes for its clients. While initial deployments were for managing information technology assets, the company is now seeing use cases expand to security, human resources, and customer service. ServiceNow recently launched a machine learning engine called Intelligent Automation Engine, which will be used to forecast outages, automate routing, predict outcomes, and benchmark performance for clients. In a ServiceNow survey, 86% of companies said that they will need increased automation in order to manage the volume of work that they expect to have in 2020.

Source: https://www.businesswire.com/news/home/20170509005837/en/ServiceNow-Launches-Intelligent-Automation-Engine%E2%84%A2

Conclusion

We believe the leading companies in the industry recognize the value that AI is creating and are allocating meaningful investment dollars to build out their AI-related capabilities. The impacts of this revolution will be widespread and pervasive across end markets, business models, and geographies. In conclusion, our outlook for AI is very promising, as we believe the leading AI companies will see higher growth rates and greater profitability than those that are lagging in this area.

 


This material is for general information purposes only and does not constitute an offer to sell, or a solicitation of an offer to buy, any security. TCW, its officers, directors, employees or clients may have positions in securities or investments mentioned in this publication, which positions may change at any time, without notice. While the information and statistical data contained herein are based on sources believed to be reliable, we do not represent that it is accurate and should not be relied on as such or be the basis for an investment decision. The information contained herein may include preliminary information and/or "forward-looking statements." Due to numerous factors, actual events may differ substantially from those presented. TCW assumes no duty to update any forward-looking statements or opinions in this document. Any opinions expressed herein are current only as of the time made and are subject to change without notice. Past performance is no guarantee of future results. © 2018 TCW