Clients should contact representatives and execute transactions through a J.P. Morgan subsidiary or affiliate in their home
jurisdiction unless governing law permits otherwise.
AI and Semiconductors
Exponential growth from AI adoption in the cloud and at the edge

We see Artificial Intelligence (AI) as the next growth driver for the semiconductor industry, accelerating growth in the Cloud as well as in Edge devices. In this collaborative report, we lay out key drivers for AI semis demand growth and the impact on industry dynamics from rising AI adoption. We forecast AI semiconductor revenues (focusing on Computing demand) to record a 59% CAGR from 2017-22, reaching a $33bn market size. While initial demand is likely to be led by AI training workloads in the datacenter, we expect AI inference to take off in the next 2-3 years and become the biggest semiconductor market. AI and High Performance Computing (HPC) are the next catalysts of leading-edge semis demand, helping advanced process technology in logic semis, lifting high-end memory adoption and semiconductor capital equipment spending. Key stocks under our global semis coverage leveraged to the AI trend are TSMC, DRAM makers, Intel, Inphi, NVIDIA, ASML, KLAC, and Hitachi High-Tech.

AI training demand already rising rapidly, GPUs currently dominant: AI training (computing demand for training AI algorithms) has been the first market to take off, primarily in hyperscale datacenters for large internet players. We expect this segment to grow at a 49% CAGR, reaching $14bn by 2022.

AI inference the real deal, but likely a fragmented market structure: While AI inference (putting AI into real-world use cases) may have a delayed take-off, we expect it to become the biggest market as use cases proliferate. We forecast AI inference semis to be a $19bn market by 2022, growing at a 70% CAGR.

AI key for leading-edge logic, high-end memory and semi capex growth: AI is key for the next leg of growth in leading-edge logic semis (GPU, accelerators and CPU), high-end memory (higher bandwidth requirements for parallel processing), and consequently for increasing semiconductor equipment spending. We expect AI semis to be 6% of the global semi market by 2022.

Ushering in a new era – ‘xPU’ startups and rising accelerator demand: AI is also ushering in a new breed of semiconductor startups focusing on specific AI training and inference solutions, or ‘xPUs’. Furthering the trend of heterogenization of silicon in the cloud, we also see a rapid surge in demand for accelerators (FPGAs, custom ASICs) in hyperscale datacenters.

See more views from our global semiconductor analyst teams on pages 43-53.

Note that the AI semi market size estimate is based mainly on logic and computing and does not include demand from memory, storage or networking semiconductors, which would also grow rapidly along with rising AI adoption.

Gokul Hariharan AC, (852) 2800-8564, gokul.hariharan@jpmorgan.com, Bloomberg JPMA HARIHARAN <GO>, J.P. Morgan Securities (Asia Pacific) Limited
JJ Park AC, (82-2) 758-5717, jj.park@jpmorgan.com, Bloomberg JPMA PARK <GO>, J.P. Morgan Securities (Far East) Limited, Seoul Branch
Hisashi Moriyama AC, (81-3) 6736-8601, hisashi.moriyama@jpmorgan.com, Bloomberg JPMA MORIYAMA <GO>, JPMorgan Securities Japan Co., Ltd.
Harlan Sur AC, (1-415) 315-6700, harlan.sur@jpmorgan.com, Bloomberg JPMA SUR <GO>, J.P. Morgan Securities LLC
Sandeep Deshpande AC, (44-20) 7134-5276, sandeep.s.deshpande@jpmorgan.com, Bloomberg JPMA DESHPANDE <GO>, J.P. Morgan Securities plc
Albert Hung, (886-2) 2725-9875, albert.hung@jpmorgan.com, J.P. Morgan Securities (Taiwan) Limited
Erica Wong, (852) 2800-8568, erica.wong@jpmorgan.com, J.P. Morgan Securities (Asia Pacific) Limited
Table 1: Beneficiaries of rising AI semiconductor market
IC design houses NVIDIA, Inphi
Foundry/OSATs TSMC, Leading OSATs, Amkor
Memory Micron
IDM Intel, STMicro
Semi Equipment ASML, Hitachi High-Tech, KLA-Tencor
Source: J.P. Morgan.
See page 61 for analyst certification and important disclosures, including non-US analyst disclosures.
J.P. Morgan does and seeks to do business with companies covered in its research reports. As a result, investors should be aware that the
firm may have a conflict of interest that could affect the objectivity of this report. Investors should consider this report as only a single factor in
making their investment decision.
www.jpmorganmarkets.com
This document is being provided for the exclusive use of Steven Cock at ABN AMRO GROUP N.V..
Gokul Hariharan Asia Pacific Equity Research
(852) 2800-8564 07 February 2018
gokul.hariharan@jpmorgan.com
Table of Contents
Investment Summary ...............................................................4
AI semiconductor market size to reach $33bn by 2022.............................................4
Training is initial market catalyst for AI semiconductors..........................................4
AI Inference the real deal and likely much bigger than training ................................5
AI seeing strong growth in the cloud and at the edge................................................6
Key stocks with meaningful AI exposure.................................................................6
What does AI mean for semiconductors?..............................8
The next growth driver after the smartphone era ......................................................8
Semis are critical in every step of the AI process flow..............................................9
IoT and AI / deep learning should go hand in hand.................................................10
AI fits well within the move towards accelerated computing and heterogenization of the datacenter ........................................10
AI semiconductor market – $33bn by 2022 ..........................11
AI – Training and inference...................................................................................12
AI – Training market to grow at a 49% CAGR until 2022E ..13
AI – Training adoption picking up in key Internet vendors .....................................14
Strong datacenter growth driven by migration to public cloud ................................15
AI training silicon – Parallel processing is a key attribute.......................................17
Memory bandwidth also an important feature for training ......................................19
Floating Point Computing more common in use than Integer in deep learning ........20
AI training semis’ market size ...............................................................................22
AI – Training – a $14bn market by 2022................................................................23
NVIDIA GPUs have an early lead in silicon and ecosystem ...................................24
Other accelerators and ASICs are also being trialed ...............................................24
Integrated solutions likely to emerge as market matures .........................................25
AI – Inference – the real deal .................................................25
Inference should be much bigger than training but a lot more fragmented...............26
Inference in the cloud or at the edge? Four key decision factors..............................26
Inference in the cloud ............................................................27
Inference training market is still very fragmented...................................................27
A steeper adoption curve, rising to $8bn in 2022....................................................28
What kind of Semiconductor solutions fit inference?..............................................29
Inference at the edge..............................................................31
Embedded solutions likely to be common in consumer devices ..............................32
Market size and assumptions .................................................................................32
Smartphone AI-Inference market...........................................................................33
Automotive Inference market – ADAS key driver..................................................35
Surveillance market...............................................................................................38
Number of use cases likely to grow exponentially..................................................40
Investment Summary
AI semiconductor market size to reach $33bn by 2022
In a very simplified sense, the process of AI / machine learning can be segregated into two main steps: (1) Training, in which the system learns and perfects a model or algorithm from massive datasets, and (2) Inference, in which the system applies the model to a real-life scenario or use case such as facial recognition or speech recognition. See more details in the AI Basics and AI / Deep Learning value chain sections.

We forecast the total AI-related semiconductor market size to grow at a 59% CAGR to reach $33bn by 2022, driven by increasing use cases for AI applications, a dramatic increase in AI training needs across hyperscale Internet vendors and enterprises, as well as rising adoption of inference use cases in datacenters, smartphones, cars, and several IoT-related applications.

Note that the market size estimate is largely based on logic and computing and does not include demand in other areas which would rise in sync with AI adoption, such as memory, storage, and networking semiconductors.

We expect AI training to dominate the AI semi market initially, likely followed by a strong pick-up in AI inference once use cases broaden. By 2022, we expect AI inference to account for 57% of the AI semiconductor market.
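As a back-of-the-envelope sanity check (our own sketch, not a calculation published in the report), the headline CAGRs can be inverted to see what 2017 base values they imply, and whether the training and inference segments add up to the total:

```python
# Invert end = base * (1 + cagr) ** years to recover the base-year value.
# Our own arithmetic check of the report's headline figures.
def implied_base(end_value, cagr, years):
    return end_value / (1 + cagr) ** years

total_2017 = implied_base(33, 0.59, 5)      # $33bn by 2022 at a 59% CAGR
training_2017 = implied_base(14, 0.49, 5)   # $14bn training at a 49% CAGR
inference_2017 = implied_base(19, 0.70, 5)  # $19bn inference at a 70% CAGR

# Inference as a share of the 2022 market (report: 57%)
inference_share_2022 = 19 / 33
```

The implied 2017 segment bases (roughly $1.9bn training plus $1.3bn inference) sum to approximately the implied $3.2bn total, so the headline figures are internally consistent.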
GPU the early leader for AI training; much higher bandwidth requirements for memory

While the CPU is powerful, it runs on a serial processing architecture, which, to some extent, means it can only solve one task at a time. However, machine learning often involves millions or even billions of calculations, which lend themselves to simultaneous computation and prompt the use of accelerators; GPUs, FPGAs and ASICs are some common examples.

Among AI training silicon solutions, we believe the GPU (NVIDIA in particular) has taken the lead thanks to its parallel processing characteristics, while other accelerators and ASICs are also being trialed. We expect more integrated solutions (CPU-GPU/multiple accelerators) to start gaining traction once the market becomes more mature.

Besides chip processing capability, the speed of data transfer is also critical in determining the efficiency of training, given its data-intensive nature. This consequently calls for an upgrade in the memory architecture and stimulates demand for High Bandwidth Memory (HBM), which stacks memory chips vertically on top of one another.
Figure 2: AI training semiconductor market revenue – US$bn, yoy growth (%), 2016-2022E
Figure 3: AI inference semiconductor market revenue – US$bn, yoy growth (%), 2016-2022E
AI cloud semiconductor market size should reach $22bn by 2022, in our view,
growing at a 61% CAGR.
DRAM
As machine learning requires large amounts of data processing, we expect a broader
adoption of High Bandwidth Memory and GDDR (graphics double-data-rate
DRAM). This bodes well for DRAM vendors such as SEC, SK Hynix and Micron in
the long term.
NVIDIA
NVIDIA is the dominant player in the AI/Deep Neural Network (DNN) training GPU market, with strong silicon offerings and growing software ecosystem support. Although several vendors are attempting to enter this market, we see NVIDIA as well-positioned to maintain its market leadership in the near to medium term.
Intel
Intel has strengthened its AI capabilities through a number of M&A deals (Altera, Nervana, Movidius and Mobileye) over the past few years. We are upbeat on Intel’s FPGA opportunities in the AI inference market, and its Nervana Neural Network Processor (NNP) solutions, while still at an early stage, have good potential in the AI training market. The company is also able to address automotive demand through Mobileye and the low-power edge inference market through Movidius.
ASML
As most AI silicon solutions are likely to use leading-edge nodes (7nm and beyond), robust AI demand could translate into upside for EUV tools and benefit ASML. We expect ASML to benefit from higher revenue and gross margin improvement from EUV tools.
Hitachi High-Tech
Hitachi High-Tech is our top pick in Japan’s SPE sector, given its high leverage to elevated foundry spending on leading-edge nodes (which supports Hitachi High-Tech’s high-end Scanning Electron Microscope and etcher businesses) to meet AI- and machine-learning-related computing processor demand.
Looking ahead to the next 3-5 years, although we do not see one single high-volume
consumer product such as Smartphone/PC which could drive meaningful top-line
growth in semis, we believe that the AI and deep learning opportunity represents a
strong growth area for semiconductor content growth, both in the cloud/datacenter
and in multiple edge devices.
We expect the AI semiconductor market to grow both in the cloud and at the edge. AI in the cloud is where adoption is happening earlier, and we expect that market to reach $22bn by 2022, growing at a 61% CAGR. AI at the edge is likely to evolve later, but is likely to grow faster in future years; we expect a market size of $11bn by 2022, growing at a 57% CAGR.
Note that the market size estimate is largely based on logic and computing and does
not include demand in other areas which would rise in sync with AI adoption such as
memory, storage and networking semiconductors.
Figure 6: AI in the cloud – Semis’ market size and yoy growth – US$bn, yoy growth (%), 2016-2022E. Source: J.P. Morgan estimates. Note: This includes AI Training and Inference in the datacenter.
Figure 7: AI at the edge – Semis’ market size and yoy growth – US$bn, yoy growth (%), 2016-2022E. Source: J.P. Morgan estimates. Note: This is largely comprised of AI Inference in Edge devices – use cases may be much bigger than what we anticipate in our analysis.
For the purposes of this report, we restrict the discussion largely to the training and inference aspects of AI and their impact on semiconductor demand. Semiconductor demand from data collection through sensors and other IoT devices is also a very large, albeit fragmented, market.

Deploying more IoT devices with sensors would likely be the starting point. Take the example of facial recognition: its development was largely accelerated by the availability of huge photo databases, built on contributions from millions of smartphones and cameras.
There is a virtuous circle at work here, in our view, since credible AI use cases will
incentivize enterprises and even governments to collect more data and install more
IoT devices for machine learning.
For example, Microsoft leads in deploying FPGAs in datacenters. The use of such
reprogrammable chips allows its developers to directly code on the hardware and
skip the software middle layer, which speeds up the overall processing.
For AI and deep learning, the same trend is repeating given that deep neural
networks require heavy parallelization in processing. GPUs and other accelerators
are seeing much bigger adoption for AI/deep learning training related workloads.
For AI inferencing, workloads are even more varied, given that the inference
requirements could be very application-specific.
We believe that rising AI adoption could lead to a significant growth in the adoption
of heterogeneous computing in the datacenter.
AI silicon should represent the fastest growing part of the semiconductor industry,
accounting for 6% of overall industry by 2022, in our view.
Source: WSTS, J.P. Morgan estimates. Note: We expect global semis market to grow 5-6% CAGR from 2018-2022.
AI Training refers to the part of the process flow (as indicated above) that deals predominantly with training deep learning/AI algorithms on large sets of data to build predictive power and intelligence. This is extremely computing-intensive, requires very large parallel processing capability, and happens mainly on AI servers (servers accelerated with GPUs or other silicon tailored for deep learning workloads).
AI Inference refers to the use of the trained model in real-world use cases – be it facial
recognition or robotics or surveillance. In most of these instances, the algorithm is
hardwired into semiconductors on the edge (in terminal devices) or the devices access
inference-related hardware on the cloud. Inference-related workloads are likely less
computing-intensive, but may be very sensitive to latency and accuracy due to real-
world performance implications. Silicon needs are likely to be very fragmented, from
AI inference servers (with accelerators for inference workloads) in the cloud to simple
processor chips with embedded AI processing units.
AI Training to take off first, but AI Inference should by far be the bigger
market
AI Training is already taking off (since 2016) as multiple companies see value in
training algorithms for multiple use cases. Inference-related use cases are likely to
follow, once algorithm training reaches mature levels and deployment in various use
cases picks up speed. Eventually, inference-related AI workloads should be much
bigger than AI Training demand, in our view.
As the value of developing deep learning algorithms becomes more apparent for
diverse use cases and datasets, we expect rapid growth for AI training in the next few
years. Internet vendors are by far the leaders in AI training adoption, but we expect
traditional enterprises also to follow through over the next few years.
While on-premises AI training is a costly exercise (an NVIDIA DGX1 GPU server
costs $149k), the widespread offering of deep learning training instances by various
public cloud providers should make it much more affordable for a vast number of
enterprises.
Google
Google’s use of AI/ML spans search (RankBrain), hardware (Google Home/Pixel phones/Pixel Buds), Google Cloud (TensorFlow/various APIs), Waymo and AlphaGo.
Amazon
Amazon was an early adopter of AI, which is prevalent throughout the company’s
retail and cloud business segments. The four key areas where AI is core to Amazon’s
offering are: 1) Amazon’s product recommendation engine, 2) Amazon Alexa, 3)
Amazon Web Services (AWS), and 4) Amazon Go. We note there are many other
areas where Amazon utilizes AI, including its supply chain, demand forecasting,
capacity planning, fraud detection, translations and more.
Alibaba
Alibaba identifies four key elements of AI (the “CUBA” concept): cloud computing, use cases, big data and algorithms. We think the company has competitive edges in all these areas, with its strong cloud infrastructure (the No.1 cloud service provider in China), extensive use cases (from ecommerce and local services to digital content), rich data resources generated from those use cases, and a seasoned technology team.
Tencent
Tencent recently unveiled its AI strategy and related product lines in four major
aspects: (1) establishing AI laboratories, (2) deploying AI applications in its games,
content, social platform, financial service and healthcare, (3) providing AI-as-a-
service through Tencent Cloud, and (4) building an “open-source” ecosystem.
Baidu
Baidu’s AI strategy comprises four major projects:
For instance, Amazon Web Service and Google Cloud Platform both offer machine
learning engines that allow users to create deep learning models without managing
the hardware components, at affordable prices.
These services not only enable enterprises to leverage AI at more controllable cost,
but also encourage a faster adoption of machine learning, in our view.
This is in line with our long-held view that more datacenter IT spend is likely to go towards the public cloud. In the case of AI, this migration is happening from the start (AI-on-demand models are already available) and could drive much quicker adoption.
Enterprise in-house servers could also see some pickup, but should remain limited
We still expect some enterprises to start building their own AI datacenters, especially in industries that handle very sensitive data or players with intensive AI applications. For example, some auto OEMs are currently researching ADAS or automated driving, which requires continued AI training to achieve higher accuracy and a lower failure rate.
However, the economics work against the on-premise model (since demand for AI training is likely to be infrequent rather than a regular, predictable workload), and we expect it to be limited to very large enterprises with significant spending power.
Feature extraction is one of the important steps in neural network training. During the process, the neurons in each layer convert the input information (e.g. an image) into a 3-D input, process the data with their specific weights and biases, and generate an output, which goes to the next layer for further processing. The inputs are refined into a smaller set of categories after processing by the neurons in each layer. At the end of the process, the neural network generates an output; if it is wrong, the network revises the weights and biases of each neuron and checks whether the success rate improves.
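The loop described above (layers of weights and biases, an output, and repeated weight revision until the success rate improves) can be sketched in a few lines of NumPy. This is our own toy illustration on synthetic data, not the report's methodology or any production training setup:

```python
import numpy as np

# Minimal sketch of the training loop described above: each layer multiplies
# its input by a weight matrix, adds a bias, applies a nonlinearity, and
# passes the result on; the weights and biases are then revised in proportion
# to how wrong the output was (gradient descent / backpropagation).
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 3))               # 200 samples, 3 features (toy data)
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy target the network must learn

W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)   # hidden-layer weights/biases
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)   # output-layer weights/biases

def forward(X):
    h = np.tanh(X @ W1 + b1)                # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # output layer: probability
    return h, p

lr = 0.5
for step in range(500):
    h, p = forward(X)
    err = p - y[:, None]                    # how wrong the output is
    # Revise weights and biases, layer by layer, based on the error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
accuracy = ((p[:, 0] > 0.5) == (y > 0.5)).mean()  # success rate after training
```

Every pass over the data is dominated by the matrix multiplications `X @ W1` and `h @ W2`, which is exactly the parallel, compute-intensive workload the report argues favors GPUs and other accelerators.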
Source: Mathworks.com. Note: Convolutional neural network is a subset of deep neural network.
For instance, Intel’s most advanced server CPU, the Xeon Phi 7290, has 72 cores, each running at 1.5GHz, while NVIDIA’s most advanced GPU, the Tesla V100, has 5,120 cores that can all run in parallel.
Some vendors are also trialing FPGAs or ASICs custom-built for deep learning for large workloads. Google, for instance, claims that it takes merely eight TPUs and an afternoon to train a translation model that would take 32 GPUs a full day.
High Bandwidth Memory (HBM), which stacks memory chips vertically on top of one another, thus emerges as a viable new solution. This 3D packaging approach not only widens the memory interface (from 32-bit in GDDR5 to 1,024-bit), but also places each DRAM die closer to the processing unit and improves the efficiency of space use, enabling faster data transfer in a smaller form factor with lower power consumption.
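To put the interface-width jump in perspective, per-device bandwidth is roughly bus width times per-pin data rate. The pin speeds below are our own illustrative assumptions (roughly 8Gbps per pin for high-end GDDR5, 2Gbps per pin for HBM2), not figures from the report:

```python
# Rough per-device bandwidth: bus width (bits) x per-pin rate (Gbps) / 8 bits-per-byte.
# Pin rates are illustrative assumptions, not report data.
def bandwidth_gB_per_s(bus_width_bits, pin_rate_gbps):
    return bus_width_bits * pin_rate_gbps / 8  # GB/s

gddr5_chip = bandwidth_gB_per_s(32, 8.0)     # one 32-bit GDDR5 chip at ~8Gbps/pin
hbm2_stack = bandwidth_gB_per_s(1024, 2.0)   # one 1024-bit HBM2 stack at ~2Gbps/pin
```

Under these assumptions a single HBM2 stack delivers roughly 8x the bandwidth of a GDDR5 chip, despite each HBM pin running far slower; the win comes entirely from the much wider interface that 3D stacking makes practical.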
Given HBM’s superiority in data transmission, it has been adopted in most advanced GPU and AI-related chips, with users including NVIDIA, Google and Intel. All leading memory players have been working on HBM, or at least on related products that use 3D packaging. Samsung and SK Hynix both commenced mass production of HBM2 in 2016, and in July 2017 Samsung announced plans to increase 8GB HBM2 capacity to meet strong demand. Intel and Micron have also partnered to develop a similar technology named Hybrid Memory Cube (HMC).
Figure 21: Floating-point operations per second for CPU and GPU
Source: NVIDIA.
However, for computers that process information in discrete form (think of the difference between humans listening to a continuous stream of speech and computers converting it into a series of separate binary codes), it is extremely hard, and at times impossible, to directly output a perfect curve. Instead, it makes more sense to divide the curve into many small straight strokes and to increase their number: while each stroke on its own is a less "precise" representation, composing the curve from more strokes effectively compensates for the individual loss and yields a better, more precise representation of the full picture.
Similarly for deep learning, while each individual neuron in the neural network can
sacrifice some degree of accuracy with the use of floating point computing, the
existence of more neurons is more than enough to make up for that loss. It is also a
more efficient use of finite computing resources. This bodes well for GPUs that are
good at parallel processing or even ASICs that are specialized in heavy matrix
multiplications.
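The "many small strokes" intuition above can be made concrete with a toy numerical sketch (our own illustration, not from the report): approximate a curve with N straight segments and watch the worst-case error shrink as N grows, even though each individual segment stays crude.

```python
import math

# Approximate sin(x) on [0, pi] with n straight segments and measure the
# worst sampled gap between each chord and the true curve.
def max_error(n_segments):
    xs = [i * math.pi / n_segments for i in range(n_segments + 1)]
    worst = 0.0
    for i in range(n_segments):
        x0, x1 = xs[i], xs[i + 1]
        y0, y1 = math.sin(x0), math.sin(x1)
        for t in range(1, 10):  # sample the gap inside each segment
            x = x0 + (x1 - x0) * t / 10
            chord = y0 + (y1 - y0) * t / 10  # straight "stroke"
            worst = max(worst, abs(math.sin(x) - chord))
    return worst
```

With 4 segments the worst error is on the order of a few percent; with 64 segments it drops by more than two orders of magnitude. The analogy to the passage above: each low-precision unit is crude, but many of them together recover the full picture.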
Table 7: Normal datacenter server versus AI server – spec and cost comparison
Normal Server AI server
Specification Unit Cost Specification Unit Cost
CPU: E5-2650Lv4 (12 CORE) 2 $2,658 A lower grade of CPU vs. E5-2650Lv4 2 $1,994
GPU: Tesla V100 0 $0 GPU: Tesla V100 6 $42,000
Memory: 16GB, DDR4-2400, ECC 12 $1,716 Memory: 16GB, DDR4-2400, ECC 18 $2,574
Storage: Boot SSD, 120GB 1 $125 Storage: Boot SSD, 120GB 1 $125
Storage: 480GB, Medium Endurance SSD 2 $784 Storage: 480GB, Medium Endurance SSD 2 $784
Network Card: None 0 $0 Network Card: None 0 $0
Chassis Costs 0 $0 Chassis Costs 0 $0
Motherboard 1 $299 Motherboard 1 $299
CPU Heat Sink 2 $28 CPU Heat Sink 2 $28
Power Supply 1 $95 Power Supply 1 $143
Storage Backplane 1 $75 Storage Backplane 1 $75
Drive Caddies 4 $52 Drive Caddies 4 $52
Fans 5 $50 Fans 5 $125
Internal Cables 1 $20 Internal Cables 1 $20
Riser Cards 1 $19 Riser Cards 1 $19
Sheet Metal Case 1 $100 Sheet Metal Case 1 $100
Assembly Labor and Test 1 $150 Assembly Labor and Test 1 $150
10% Markup 1 $478 10% Markup 1 $478
Total Cost $6,650 Total Cost $48,966
Price difference 636%
Source: J.P. Morgan.
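The 636% figure at the bottom of Table 7 is simply the AI server's total-cost premium over the normal server, and the six GPUs account for virtually all of the difference:

```python
# Checking the "price difference" row of Table 7 above (figures from the table).
normal_total = 6650    # normal server total cost (US$)
ai_total = 48966       # AI server total cost (US$)

premium_pct = round((ai_total / normal_total - 1) * 100)  # cost premium in %

gpu_cost = 42000       # six Tesla V100 GPUs, per the table
gpu_share_of_delta = gpu_cost / (ai_total - normal_total)  # GPUs' share of the gap
```

The premium works out to 636%, matching the table, and the GPUs alone represent over 99% of the incremental cost, which is why the report treats GPU content as the key incremental semis value per AI server.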
While it is not an exact parallel, the last time we saw this kind of step-up in spending was during the early stage of smartphone adoption, when consumers saw sufficient value to spend $600-700 on iPhones, plus considerably more on data plans, compared with $300-400 for feature phones previously, which came with largely cheap voice-based plans.
(Columns: 2016 / 2017E / 2018E / 2019E / 2020E / 2021E / 2022E)
% of cloud server demand: 30% / 35% / 40% / 45% / 50% / 54% / 57%
% of enterprise server demand: 70% / 65% / 60% / 55% / 50% / 46% / 43%
Server shipments for AI training (k): 57 / 201 / 334 / 496 / 691 / 864 / 1,012
AI training server penetration rate: 1% / 2% / 3% / 5% / 6% / 8% / 9%
ASP (US$)
GPU-based (using the latest GPU in the market): 2,500 / 3,000 / 3,060 / 3,121 / 3,043 / 2,967 / 2,893
Accelerators: 600 / 618 / 637 / 656 / 675 / 696 / 716
Number of GPUs per server: 3.0 / 4.0 / 4.8 / 5.5 / 6.3 / 7.0 / 7.5
Number of other accelerators per server: 3.0 / 3.5 / 4.0 / 4.5 / 5.0 / 5.5 / 6.0
Incremental semis value from AI training servers (US$mn): 377 / 2,014 / 3,859 / 6,393 / 9,362 / 12,096 / 14,921
Incremental logic semis value from AI training servers (US$mn): 364 / 1,920 / 3,682 / 6,180 / 9,023 / 11,621 / 14,295
Source: J.P. Morgan estimates, Company data.
We currently see four GPUs in each AI server on average, but the number of GPUs deployed is likely to pick up gradually, given rising compute requirements. We exclude the value of memory embedded in accelerators from the incremental semis value in order to isolate the logic semis value from AI training servers.
NVIDIA GPUs have an early lead in silicon and ecosystem
As we described before, the parallel processing architecture of GPUs makes them
naturally suited to AI/deep learning workloads. In addition, NVIDIA has cultivated
demand in early use cases (such as scientific computing, HPC and supercomputing
applications) and has a sizeable lead with its programming platform for AI use cases
(CUDA) and widespread support for most machine learning libraries and
frameworks, such as TensorFlow, Caffe2, Torch and Theano. AI training workloads
also benefit from heavy parallelization and hardware acceleration while being less
sensitive to power consumption (since AI training may not be running all the time).
Among currently available silicon, GPUs are best suited to such workloads, in our
view.
Hence, we expect GPUs to dominate the AI training silicon market on the strength of
their parallel computing architecture, with other accelerators (FPGA/ASIC)
accounting for only a small portion of the market. NVIDIA is the dominant player in
the AI/Deep Neural Network (DNN) training GPU market, with strong silicon
offerings and growing ecosystem support.
Google’s second-generation Tensor Processing Unit (TPU) is one of the most well-
known attempts, in our view. The in-house-designed ASIC can deliver up to 180
teraflops of floating-point performance, which bodes well for data-intensive
workloads. This poses a significant challenge to existing GPU solutions: Google has
already claimed that it takes merely eight TPUs and an afternoon to train a
translation model that would take 32 GPUs a full day.
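Taking Google's claim at face value, a rough chip-hours comparison can be sketched. The length of "an afternoon" is our assumption (we use 4 hours), so the resulting ratio is illustrative only:

```python
# Rough chip-hours comparison for Google's training claim.
# Assumption (not in the source): "an afternoon" ~= 4 hours.
gpu_chip_hours = 32 * 24        # 32 GPUs running for a full day
tpu_chip_hours = 8 * 4          # 8 TPUs for an assumed 4-hour afternoon
speedup_per_chip_hour = gpu_chip_hours / tpu_chip_hours
print(speedup_per_chip_hour)    # 24.0x under these assumptions
```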
It is important to note that these alternative solutions have yet to prove their potential
for large-scale deployment across a wide array of AI training use cases, and GPUs
remain the most commercially proven option so far. Yet even beyond deep learning,
these accelerators still have a lot of potential within datacenters, as they are also very
capable of accelerating other workloads, such as search engine queries. Thus, in any
case, they are likely to see increasing adoption in datacenters as the composition of
the datacenter becomes more heterogeneous.
Once the market matures, we should see wider adoption of more integrated solutions,
likely with CPU-GPU integration or integration of multiple accelerators on the same
chip.
For example, Intel hired industry veteran Raja Koduri from AMD as chief architect
and senior vice president of the newly formed Core and Visual Computing Group.
We believe Intel aims not only to develop discrete GPUs, but also to expand their
application into AI and automotive by integrating CPU and GPU architectures in the
medium to longer term. In addition, Intel began shipping Stratix 10 FPGAs in
October 2017, which can be used to accelerate the training process.
Given the importance of parallel computing for DNN training, GPU or any kind of
parallel computing architecture based silicon, is still likely to remain the core of AI
training silicon for a long time to come, in our view.
In the medium to longer term, we believe the inference silicon market will be much
larger than the training market, given the likely wide deployment of trained neural
networks in cloud and edge devices. After all, the purpose of training is to
commercialize and monetize the deployment of DNNs, which should lead to a
bigger silicon market at some point.
We admit that it is tough to predict all the various use cases that could spring up for
AI and deep learning in various verticals. In our exercise, we restrict the demand
forecasting to AI inference in the cloud as well as three use cases on the edge
(Smartphones, Autonomous driving and Surveillance) to illustrate the demand
impact on semiconductors from AI inference.
Maturity of algorithm
Besides costs, algorithm maturity could also determine where the inference workload
resides. Mature models with fewer iterative training needs are the most likely to be
deployed at the edge, given their readiness to be compressed to fit the limited
capacity of on-device chips. One example of a more mature deep learning model is
facial recognition, which, once mature, may not require very frequent changes.
(Apple has embedded machine learning in the Neural Engine of the A11 Bionic
application processor for the iPhone X's Face ID functionality.)
Availability of bandwidth
Availability and cost of bandwidth is another consideration. For always-connected
devices with a stable network connection (smart speakers again, which are connected
through home networks or broadband connections), it may be more feasible to keep
the bulk of the inference workload online.
Latency of computing
Closely linked to the bandwidth question is also the latency tolerance of the AI use
case. For very low latency use cases such as autonomous driving, real-time
surveillance and safety, or for mission-critical applications in industrial automation,
AI inference may need to be largely done at the edge.
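The three factors above (algorithm maturity, bandwidth, latency) can be summarized in a toy decision rule. The function and thresholds below are our own illustration, not an industry standard:

```python
# Illustrative-only rule of thumb for where an inference workload might run,
# based on the three factors discussed: latency, algorithm maturity, connectivity.
def inference_location(model_is_mature: bool,
                       stable_connection: bool,
                       latency_budget_ms: float) -> str:
    if latency_budget_ms < 50:      # e.g. autonomous driving, real-time surveillance
        return "edge"
    if model_is_mature:             # mature models compress well onto device chips
        return "edge"
    if stable_connection:           # e.g. smart speakers on home broadband
        return "cloud"
    return "edge"

print(inference_location(False, True, 500))   # cloud: immature model, good link
print(inference_location(True, True, 500))    # edge: mature model fits on-device
print(inference_location(False, True, 10))    # edge: hard real-time budget
```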
For inference in the cloud, the use cases are likely to be performance-heavy, with less
mature algorithms which are compute- and memory-intensive and likely to require
frequent re-training.
At this point, we believe that most of the inference related workloads in the cloud are
still handled through x86 CPUs. However, over a period of time, we believe there
will be a widespread adoption of accelerators (GPUs, FPGAs, ASICs, SoCs) which
could improve flexibility, parallelism (hence performance), and power efficiency for
specific workloads.
                                                                 2016E  2017E  2018E  2019E  2020E  2021E  2022E
% of server demand from datacenter                                 30%    35%    40%    45%    50%    54%    57%
% of enterprise server                                             70%    65%    60%    55%    50%    46%    43%
Server shipments for AI inference (k)                               17     88    162    438    829  1,351  1,892
AI inference server penetration rate                                0%     1%     2%     4%     8%    12%    16%
ASP (US$)
  Accelerators (excluding stock Intel server CPUs)                 600    615    630    646    630    614    599
  Server CPUs                                                    1,500  1,545  1,591  1,639  1,688  1,739  1,791
Number of accelerators per server                                  4.0    3.5    5.0    5.5    6.0    6.5    7.0
Incremental logic semis value from AI inference servers (US$mn)     26    157    381  1,261  2,780  5,213  7,977
Source: J.P. Morgan estimates, Company data.
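The table's bottom line can be approximately reproduced from its own inputs (2022 column: 1,892k servers × 7.0 accelerators per server × $599 ASP), with a small residual from rounding in the published figures:

```python
# Reproduce the 2022 incremental logic semis value from the table's own inputs.
servers_2022_k = 1_892       # AI inference server shipments, thousands of units
accels_per_server = 7.0
accel_asp_usd = 599

value_usd_mn = servers_2022_k * accels_per_server * accel_asp_usd / 1_000
print(f"US${value_usd_mn:,.0f}mn")  # ~7,933 vs. the published 7,977 (rounding)
```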
Figure 26: NVIDIA’s TensorRT optimizes neural networks
Figure 27: TensorRT delivers 3.7x faster inference on V100 vs. P100 and 18x faster inference of TensorFlow models on V100
The company has also indicated several design wins beyond US hyperscale vendors
– such as Alibaba, Baidu, Tencent and JD.com for the Tesla V100/TensorRT – and
large enterprise customers like Huawei, Inspur and Lenovo for NVIDIA’s HGX-
based servers with Volta GPUs, which are more power-efficient and have a smaller
form factor.
However, the criticism of GPUs remains that they may not be power-efficient
enough for inference workloads and could be overkill for many inference use cases.
Cloud-based provisioning (pay-as-you-go) could address some of these issues, but
adoption of such models is still in the early days.
ASICs – Best fit, but only for very large use cases
The most efficient silicon architectures for DNNs are likely to be those designed
from scratch (ASICs), since designers can eliminate features redundant for the target
application and optimize power efficiency. On the other hand, because the
architecture is fixed for specific programs, an ASIC cannot be reprogrammed to
accommodate new frameworks or changing requirements.
We have seen hyperscale players design their own custom ASICs for machine
learning, the most well-known being Google’s Tensor Processing Unit (TPU).
Designing an ASIC solution is usually costly (tape-outs, development costs) and
takes a long time (1-2 years) for product development. Therefore, it requires a large-
scale use case to justify the ROI. Google, for example, claims that its TPU solutions
have saved it the cost of building another 12 datacenters for AI workloads, but such
benefits may only accrue when demand reaches significant scale.
While the development cost for a custom ASIC is much higher, some argue it could
provide the best performance thanks to the most tailored structure. For instance, the
Matrix Multiply Unit in Google’s TPU hosts 65k Arithmetic Logic Units (ALUs),
which significantly lifts the number of operations per cycle.
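To put the ALU count in context: the first-generation TPU's publicly described 65,536-ALU matrix unit at a 700MHz clock (figures from Google's TPU disclosures; counting each multiply-accumulate as 2 operations is our convention for the arithmetic) implies roughly 92 trillion operations per second:

```python
# Back-of-envelope peak throughput for a 65,536-ALU matrix unit.
# Clock (700 MHz) is from Google's first-generation TPU disclosures;
# counting each multiply-accumulate as 2 ops is a common convention.
alus = 256 * 256            # 65,536 multiply-accumulate units
clock_hz = 700e6
ops_per_mac = 2

peak_tops = alus * ops_per_mac * clock_hz / 1e12
print(f"{peak_tops:.0f} TOPS")  # ~92 TOPS
```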
FPGA is the most flexible silicon solution, and usually power efficient as well
One challenge for custom designs is that deep-learning algorithms are rapidly
evolving, at least in this early stage of development. The Field-Programmable Gate
Array (FPGA), an integrated circuit that allows users to program (and reprogram)
the hardware configuration, therefore offers the most flexibility and eases these
concerns. This leaves much greater room for customization to fit the needs of each
specific machine learning model, which can in turn boost processing speed and
energy efficiency. Microsoft has used FPGAs widely in its Catapult and Brainwave
projects, and other cloud vendors such as Facebook and Baidu are also using FPGA
solutions. Xilinx and Intel Programmable Solutions Group (PSG, formerly Altera)
are the main FPGA suppliers. However, FPGAs have drawbacks in terms of cost and
scalability once a solution matures.
In the next few years, we believe that all of these options will co-exist for different
AI inference workloads and use cases. For purposes of estimating market size for AI
Inference in the cloud, we approximate all these solutions as accelerators and do not
differentiate between GPU, FPGA, ASICs and other forms of accelerators. What
looks quite obvious is that the accelerator adoption is likely to pick up meaningfully
for AI inference workloads in the datacenter.
We believe inference chips will eventually make their way to most edge devices, but
high-end consumer devices and auto/industrial applications are likely to witness a
faster pickup for now, given the ability to absorb incremental costs.
In the case of smartphones, for example, Apple and HiSilicon have a neural
processing unit in their APs, while MediaTek will integrate a Vision Processing Unit
(VPU) into its mobile SoCs. In such cases, the inference power of SoCs is unlikely to
be comparable to discrete solutions, but they would be able to process low-level
inference workloads at the edge.
However, for advanced functionalities such as Level 3/4 ADAS functions in auto or
real-time surveillance in security cameras, discrete silicon may still be required to
handle the inference workloads. This market is still evolving and we see a slew of
new vendors attempting to develop tailored ‘xPU’ solutions for different workloads.
In our analysis, we look at three key use cases at the edge – AI on the smartphone
(only accounting for incremental silicon for AI), ADAS or assisted driving, and
surveillance cameras. These are use cases for which precedents exist and adoption
trends are clearer, in our view.
However, we accept that this is still a narrow subset of AI use cases – with multiple
areas such as computer vision, NLP and robotics all likely to see adoption in the next
3-5 years, which could lead to a material increase in the AI inference market size.
Based on our assumptions, we expect the AI inference market at the edge to reach
$11bn by 2022, a 57% CAGR from 2017 to 2022. Smartphone AI applications are
likely to form the bigger market initially, before ADAS and automotive-related
applications pick up from the 2019-2020 timeframe. Surveillance solutions are likely
to see rapid adoption and comprise around 22% of the market size by 2022.
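Tying the three edge use cases together (2022 columns from the tables in this section): smartphones $3,783mn + automotive $5,271mn + surveillance $2,125mn sum to roughly the $11bn total, and the ~$1,177mn 2017 base gives the ~57% CAGR:

```python
# Cross-check the edge AI inference market total and CAGR from the use-case tables.
edge_2017 = 650 + 400 + 127          # smartphone + automotive + surveillance, US$mn
edge_2022 = 3_783 + 5_271 + 2_125

total_bn = edge_2022 / 1_000
cagr = (edge_2022 / edge_2017) ** (1 / 5) - 1
print(f"2022 total: ${total_bn:.1f}bn, CAGR: {cagr:.0%}")  # ~$11.2bn, ~57%
```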
                                                   2016E  2017E  2018E  2019E  2020E  2021E  2022E
Android smartphone shipments (mn)                  1,264  1,317  1,354  1,377  1,426  1,477  1,529
AI penetration rate                                   0%     6%    10%    20%    30%    38%    45%
AI smartphones (units in millions)                     -    177    348    544    697    823    957
Percentage of smartphones implementing AI             0%    12%    22%    33%    41%    47%    53%
Incremental revenue from AI smartphones (US$mn)        -    650  1,253  2,069  2,746  3,252  3,783
Source: J.P. Morgan estimates, Company data.
1. Apple
Apple is one of the most vocal smartphone makers about its plans to incorporate AI
processors in mobile devices. In the recently released iPhone 8/X, Apple introduced
the A11 Bionic chipset, which features four efficiency cores, two performance cores,
an in-house-designed three-core GPU, and a new Neural Engine dedicated to AI
inference. This chip not only speeds up the iPhone's computing performance, but
also enables compute-intensive functions such as Face ID and AR/VR. The A11
Bionic chipset is manufactured on TSMC’s 10nm process.
Figure 29: Apple's A11 Bionic chipset (1)
Figure 30: Apple's A11 Bionic chipset (2)
Source: Apple.
Besides advancing its hardware, Apple has also released a new software tool (Metal
2) and framework (CoreML) to encourage developers to include more sophisticated
3D and even AR graphics in their mobile applications.
2. Qualcomm
Qualcomm released its Snapdragon 835 processor with machine learning capabilities
in January 2017. The processor features an ARM Cortex-based Kryo 280 CPU with
four performance cores and four efficiency cores, an Adreno 540 GPU, a Spectra
180 ISP (image signal processor) and a Hexagon 682 DSP (digital signal processor)
that supports deep learning frameworks.
The chipset is currently used in multiple flagship Android models, including the US
models of Samsung S8 and Note 8, Google’s Pixel 2 series, Sony XZ Premium, LG
V30+, HTC U11 and the recently launched Razer phone.
3. HiSilicon
Huawei has joined the race towards AI in mobile by introducing its HiSilicon Kirin
970 processor in September 2017; it is used in the flagship Mate 10, released in
October. It is also manufactured on TSMC’s 10nm process.
Similar to other AI mobile processors, the Kirin 970 features an 8-core CPU, GPU
and ISP, but on top of that it includes a Neural Processing Unit (NPU) specialized
for AI applications. Huawei claims the new NPU can deliver superior performance
in computer vision and speech recognition, significantly outperforming CPU/GPU-
based approaches. According to Digitimes, Huawei’s NPU leverages IP from
Cambricon Technologies, a China-based AI start-up.
4. MediaTek
MediaTek is also chipping away at the AI market with its Vision Processing Unit
(VPU), a Tensilica DSP specifically engineered to work with the ISP for image-
related processing, such as real-time depth of field. This not only accelerates the
data-intensive calculations for graphics processing, but also frees up CPU and GPU
resources to enhance overall compute performance.
While the Helio P30 SoC is the first MediaTek product to feature a VPU, its use is
limited to enhancing graphics processing. We expect MediaTek to launch the P70
SoC with true AI capabilities in 2Q18, which should empower devices for AR/VR
applications and image and facial recognition (2D/3D).
We forecast the automotive inference market to grow at a 68% CAGR from 2017 to
2022 and reach $5.3bn.
% Adoption                                          2016E  2017E  2018E  2019E  2020E  2021E  2022E
Level 2/3                                              0%     2%     4%     5%     7%     8%    10%
Level 4/5                                              0%     0%     0%     1%     1%     2%     2%
AI as an enabler / drive assist                        0%     0%     0%     1%     2%     3%     4%
Total semis market from AI in automotive (US$mn)        -    400    681  1,962  3,152  4,247  5,271
Source: J.P. Morgan estimates, Company data.
1. NVIDIA
NVIDIA is the leading provider of GPU-based autonomous driving solutions, with
OEM partners including Toyota and Tesla. Its offerings span datacenter training
(DGX-1), software (the DRIVE platform) and edge hardware (the DRIVE PX
series).
While driving may sound like a single coherent action to a human, it requires
delicate coordination on multiple fronts. To drive itself, the vehicle needs to be able
to (1) fuse different forms of sensor data collected from cameras, radar and Lidar, (2)
localize its position with very high accuracy, (3) track and predict the movement of
its surroundings, and (4) plan the optimal path by sifting through all the data.
2. Mobileye
While the EyeQ5 chip remains relatively distant from the mass market, Mobileye's
EyeQ4 chip is already sampling. It has four CPU cores and 14 accelerator cores
(including VMP, MPC and PMA), enabling it to deliver 2.5 TOPS at 6W.
3. NXP
NXP is also attempting to take a slice of the auto AI chip market with its BlueBox
solution, an open-sourced platform for developing autonomous driving solutions.
The BlueBox engine comes with two of NXP’s in-house chips – an automotive
vision processor and an embedded compute processor.
Surveillance market
We expect surveillance cameras to adopt more AI functions (e.g. facial recognition)
in the next few years, given rising security requirements and the addition of incident
prevention and predictive functionality. We forecast the AI penetration rate to reach
22% by 2022 and incremental AI revenue for cameras to reach $2.1bn by 2022.
                                     2016E  2017E  2018E  2019E  2020E  2021E  2022E
AI revenue for AI cameras (US$mn)       28    127    301    811    984  1,539  2,125
Source: J.P. Morgan estimates, Gartner.
Smart surveillance
For time-sensitive applications such as public security, it is crucial for smart
surveillance cameras to operate at low latency and high accuracy, which prompts the
need to deploy more computational power at the edge for preliminary processing.
The GPU is by far the most dominant accelerator in this space. Chinese players are
particularly aggressive and lead in AI surveillance solutions, given the nationwide
deployment of public security cameras and strong government incentives.
HikVision
HikVision has launched a family of deep learning products from DeepinMind NVR
(Network Video Recorder) to DeepinView IP cameras.
Figure 35: HikVision's DeepinMind NVR
Figure 36: HikVision's DeepinView IP camera
Unlike previous NVR models that run on a CPU, the DeepinMind NVR uses an
NVIDIA Jetson GPU to perform advanced graphics processing functions. For
instance, the DeepinMind NVR is able to read through images and filter out false
alarms triggered by non-human objects, with accuracy as high as 90%.
The enhanced capabilities of HikVision’s edge devices enable its solution to perform
more computer vision functions at a faster speed and higher accuracy, such as human
detection, facial recognition, people counting, and vehicle management.
Of note, we think the adoption of edge inference could accelerate in smart speakers
as natural language processing (NLP) matures. Mature NLP models would then be
ready for compression and able to fit in smaller chips available at lower cost.
While we acknowledge that most start-ups currently focus on software and
algorithms given limited capital, we also see a rising number of start-ups targeting
silicon.
Most of these AI semiconductor start-ups are banking on the fact that even
GPU/FPGA solutions are not purpose-built for deep learning, which leaves the
opportunity open for tailor-made solutions for various AI and deep learning
workloads. Some examples of these challengers are Adapteva, Cerebras Systems,
Deep Vision, Graphcore, and Wave Computing.
China has its own share of AI silicon start-ups as well, such as DeePhi, Horizon
Robotics, and Intellifusion, while even cryptocurrency-mining silicon vendors such
as Bitmain are entering the AI silicon market.
There are also vendors who recognize that embedded silicon may be the way to go in
many inference applications and are hence developing AI-centric processor cores
(xPUs) to license to silicon makers – for instance, Cambricon licensing the NPU
used in HiSilicon's Kirin 970, or CEVA (not a start-up) licensing computer-vision-
oriented DSP cores.
As deep learning and AI become more widely adopted, we believe large Internet
players are likely to increase their in-house semiconductor design and development
efforts. Google has already demonstrated in-house offerings like the TPU/TPU2,
tailored solutions for its own AI workloads.
We expect this trend to proliferate among US and Chinese Internet vendors. Leading
ASIC designers are already seeing demand from large Internet vendors for multiple
development projects tailored to AI/deep learning. This is likely to have some impact
on the merchant silicon market for datacenter semiconductors, just as it diminished
TAM growth for mobile silicon in the smartphone era.
However, we believe AI could become the next volume driver for leading-edge
semiconductors, and the inference needs of edge devices also require high
performance and energy efficiency, which may be achieved through continued
semiconductor process migration.
EUV will prolong the life of Moore's Law
Our European analyst, Sandeep Deshpande, expects EUV to come to market in 2019
with improving throughput (ASML's NXE3400B is already >100wph). We believe
EUV deployment could reduce costs meaningfully through process simplification
and yield improvement, and keep Moore's Law alive in the next few years. (For a
deeper dive into EUV, please refer to Sandeep Deshpande's note, More of Moore –
ASML reinstating Moore's Law—a key AI enabler; the text in this section is adopted
and modified from that note.)
Figure 38: EUV tool throughput has crossed 100 wph above which tool can be used for HVM
Source: ASML
Table 17: Converting to a full EUV process would substantially reduce costs
                              28nm immersion  10nm immersion  7nm immersion  7nm EUV
                              litho only      litho only      litho only
Number of lithography steps   6               23              34             9
Critical alignment steps      7               36-40           59-65          12
Source: ASML.
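From the table, moving from a 7nm immersion-only flow to 7nm EUV cuts lithography steps from 34 to 9, a roughly 74% reduction (a similar ratio holds for critical alignment steps):

```python
# Step-count reduction implied by ASML's 7nm immersion vs. 7nm EUV comparison.
immersion_litho_steps = 34
euv_litho_steps = 9

reduction = 1 - euv_litho_steps / immersion_litho_steps
print(f"{reduction:.0%} fewer lithography steps")  # ~74%
```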
TSMC has indicated that High Performance Computing (of which AI is an integral
part) will contribute ~25% of revenues in 2018, and HPC is becoming the primary
growth driver for the company, a year earlier than anticipated.
Keep Moore’s Law alive – market leaders to maintain the technology lead
We believe inference at edge devices will demand higher computing power and
power efficiency, which can be achieved through node migration. The huge volume
and revenue potential also justify the heavy capex investment. Therefore, foundry
technology leaders have incentives to invest in 7nm with EUV deployment and more
advanced technologies to capture the potential market demand. In doing so, the
current market leaders could also maintain their technology gap over the laggards.
We believe that among foundries, only TSMC/SEC/GlobalFoundries are able to
address AI demand with their 7nm solutions.
Source: Company data, J.P. Morgan estimates. Note: Assuming 7-8% revenue growth for TSMC from 2020 onwards.
OSAT
AI chipsets usually have larger die sizes and require integration between logic and
memory chips, which complicates assembly processing. We believe both TSMC
(with its CoWoS) and major OSAT names (with WLP solutions, including interposer
solutions) could benefit from the higher value-add of such packaging processes.
However, only leading OSATs can enjoy the AI trend, due to high capital investment
Although OSATs should also be able to ride the AI trend in the medium to longer
term, we believe the beneficiaries are restricted to industry leaders such as Amkor,
given the high capital investment required in leading-edge technology. We believe
the most advanced products could use 2.5D packaging technology, while mainstream
products are likely to adopt flip-chip solutions.
3D XPoint, under development at Intel and Micron, may offer a significantly denser
and less expensive alternative to DRAM. However, it is not believed to be
commercially viable before 2020, and hence currently remains restricted to very
niche applications.
Supply-side implications
SEC (mid-40%), SK hynix (mid-30%) and Micron (low-20%) account for almost the
entire supply side of the server DRAM industry. In response to high demand from AI
servers, development of HBM and GDDR solutions is at the forefront for all three
industry leaders.
While GPU technology, with its parallel processing, is useful for training, inference,
with less parallelism, lends itself more toward CPU, DSP and application-specific
architectures. Moreover, ASICs (custom solutions such as the Google TPU) and
FPGAs (with increased flexibility) are also being used for deep learning inference
and other acceleration applications. For both edge and cloud applications, there has
been an increasing trend of insourcing IP, as witnessed with Apple's A-series
application processors and Google's TPU (which, as we have discussed in prior
research, leverages third-party IP).
Intel also announced in October 2017 its Nervana Neural Network Processor (NNP),
a purpose-built architecture for deep learning, with the goal of providing flexibility
for all deep learning applications while making core hardware components more
efficient. Unlike GPU technology, which has been repurposed for AI/deep learning,
Intel's NNP was purpose-built for AI, prioritizing scalability and numerical
parallelism. Intel believes Nervana can allow for higher throughput while reducing
power per computation. Without providing details, Intel announced it is working
with Facebook on AI applications on the Nervana platform, and has also discussed
Nervana being applicable to healthcare, social media, automotive and weather
applications. The Intel Nervana NNP was on track to ship before the end of 2017,
with revenue likely in 2018.
As our colleague Sandeep Deshpande discusses below, EUV is poised to be used for
advanced manufacturing at 7nm-and-below design rules, where AI/deep learning
chips will be manufactured in the coming years. Within our coverage universe,
KLA-Tencor benefits the most from EUV adoption. KLAC is our top pick in
semicaps on strong product cycles, market share gains and SAM expansion, best-in-
class gross/operating margins, and growing shareholder returns.
As AI/deep learning processors for NVIDIA and other companies are manufactured
by foundries such as TSMC, we can expect growth at SPE companies benefiting
from foundry-related capital investment to exceed overall capital investment growth
in the near/medium term.
Our bottom-up base-case forecast of 0.7% growth for SPE capital spending breaks
down into logic +1.2% YoY, foundries +11.1% YoY, DRAM +14.1% YoY, and
NAND -16.5% YoY. Although we expect Samsung Electronics (covering analyst: JJ
Park) to limit 3D NAND and related spending following three years of growth, we
forecast double-digit growth for foundry-related capital investment.
Tokyo Electron, which ranks fourth globally in Wafer Fab Equipment by Gartner
(Figure 42), holds 100% share for coaters/developers for EUVL (extreme ultraviolet
lithography). Its broad range of other products for foundries includes etching
equipment, CVD (chemical vapor deposition) equipment, cleaning equipment, and
wafer probers. Hitachi High-Technologies is strong in CD-SEM technology for
measuring gate width, and aims to expand its market share for silicon etching
equipment as well. In addition to core memory test systems, Advantest is benefiting
from demand growth for logic test systems such as the T2000 and the V93000.
Screen Holdings’ core product is wafer cleaning equipment. On January 18 we
upgraded Hitachi High-Technologies to Overweight as a foundry-related play.
[Table: SPE capital spending ($mn), Grand Total by year: 55,997, 40,519, 23,105, 45,859, 56,718, 52,929, 51,019, 55,318, 54,718, 63,789, 84,592, 85,147; YoY: 7.7%, -27.6%, -43.0%, 98.5%, 23.7%, -6.7%, -3.6%, 8.4%, -1.1%, 16.6%, 32.6%, 0.7%]
Source: Company data and J.P. Morgan estimates
AI Basics
Artificial intelligence, or machine learning, can be imagined as a machine digesting a massive amount of data to work out a model, which can then be used to infer the desired output when given similar parameters in other scenarios.
This is vastly different from traditional programming, where humans supply a predefined set of algorithms and the computer’s role is merely to process and execute them. In machine learning, the computer goes one step further: it “learns” from the given data and works out the program itself. As a result, machine learning’s requirements for (1) the amount of data and (2) computational power are much higher.
Consequently, machine learning also demands more memory (to store the data) and more efficient processors (to meet the computational load).
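The contrast with rule-based programming can be made concrete in a few lines of Python. A hand-written rule is compared with a model whose two parameters are learned from example data; the temperature-conversion task, the data points and the learning rate below are illustrative assumptions, not from this report:

```python
# Traditional programming: a human writes the rule explicitly.
def fahrenheit_rule(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: the machine infers the rule from (input, output) examples.
# We fit y = w * x + b by simple gradient descent on made-up training pairs.
data = [(0.0, 32.0), (10.0, 50.0), (20.0, 68.0), (30.0, 86.0), (40.0, 104.0)]

w, b = 0.0, 0.0          # model parameters, initially unknown
lr = 0.001               # learning rate (illustrative choice)
for _ in range(50000):   # repeatedly nudge w, b to reduce squared error
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# The learned model now approximates the hand-written rule.
print(round(w, 2), round(b, 2))   # close to 1.8 and 32.0
```

Before training, the model knows nothing about the mapping; after enough passes over the (input, output) pairs it recovers roughly w ≈ 1.8 and b ≈ 32, the same rule a programmer would have written by hand.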
In practice, designers train a neural network by running it against a large dataset; the network digests the data and works out a model that can then infer the desired output for similar inputs in new scenarios. Inference itself involves several stages of data processing. Take image processing as an example: the first step might be outlining the subject, followed by recognizing specific patterns in the picture that belong to different categories, then identifying colors, and so on. At the end of the process, the neural network generates its results and refines the processing flow if they turn out to be wrong.
Training: During the neural network training process, a set of data is fed into a framework, which is trained to classify the data and generate the desired results.
Inference: The process of running a trained neural network to classify data with labels, or to estimate missing or future values, is called inference.
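The two stages defined above can be illustrated with a toy linear classifier in Python; the labelled points and the perceptron update rule are chosen for illustration and do not come from the report:

```python
# Toy labelled data: a human has tagged points by whether x + y exceeds ~1.
train = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.3, 0.1), 0),
         ((0.7, 0.9), 1), ((0.2, 0.4), 0), ((0.8, 0.6), 1)]

# --- Training: iterate over labelled data and adjust weights on mistakes.
w = [0.0, 0.0]
bias = 0.0
for _ in range(100):                      # epochs over the training set
    for (x1, x2), label in train:
        pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = label - pred                # classic perceptron update rule
        w[0] += err * x1
        w[1] += err * x2
        bias += err

# --- Inference: the frozen weights classify inputs never seen in training.
def infer(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

print(infer(0.95, 0.9), infer(0.05, 0.1))   # prints "1 0"
```

Training mutates the weights; inference only reads them, which is one reason inference can run on much lighter hardware once training is complete.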
Figure 44: Deep learning: difference between training and inference process
Source: Intel.
AI Glossary
AI-related terminology
Artificial intelligence
Artificial intelligence (AI) can be thought of as machines simulating human intelligence. Unlike traditional rule-based programming, which requires instructions pre-defined by humans, AI machines have the capability to learn and improve by sieving through datasets, much as humans learn. While AI is still at the development stage, it has already made an impact across verticals, with applications such as facial recognition and speech analysis.
AI as a service
The high entry barriers to AI development, in both technological expertise and capital investment, give rise to a marketplace where leading AI players offer AI capabilities as rentable services. Sitting on huge compute resources and leading-edge technology, ISPs have naturally emerged as some of the largest service providers in the space. For instance, the AWS and Google Cloud platforms both offer APIs for intelligent functions such as image and speech analysis. While there are also specialized players such as API.AI and Clarifai, they are increasingly being consolidated by the big players.
Accelerated computing
Accelerated computing refers to using additional computational hardware alongside the CPU to speed up processing. This is particularly important for deep learning and big-data analytics, as they often include compute-intensive workloads that are better handled by specialized architectures such as GPUs, FPGAs or custom ASICs. “Outsourcing” this workload frees up CPU capacity and lifts overall processing efficiency.
Heterogeneous computing
As the name suggests, heterogeneous computing combines two or more different types of processing cores, most commonly CPU and GPU. Because the system includes specialized processor(s) to handle specific tasks, it can deliver higher compute performance or better energy efficiency.
Cloud computing
Unlike conventional on-premise computing, cloud computing (the public cloud) gives users the option to purchase computing resources (including networks, servers, storage, applications and services) on demand and pay per use. This not only gives users greater flexibility by renting rather than owning capacity, but also maximizes the utilization of computing resources. Some of the largest public-cloud players are Amazon (AWS), Microsoft (Azure), Alphabet (Google Cloud), Alibaba and Tencent.
TensorFlow
Originally developed by the Google Brain team, TensorFlow is an open-source software library for numerical computation, widely used for machine learning. It can be thought of as a framework that helps users construct their own machine-learning models, shortening development and deployment time.
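The dataflow idea behind such libraries, building a graph of operations first and executing it afterwards, can be sketched in plain Python. This is a conceptual illustration of the style only, not TensorFlow’s actual API:

```python
# Minimal dataflow graph: each node holds an operation and its input nodes.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        # Constants return their stored value; other ops recurse over inputs.
        if self.op == "const":
            return self.value
        args = [n.eval() for n in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError("unknown op: " + self.op)

def const(v):   return Node("const", value=v)
def add(a, b):  return Node("add", (a, b))
def mul(a, b):  return Node("mul", (a, b))

# First build the graph for y = (2 * 3) + 4, then execute it.
y = add(mul(const(2.0), const(3.0)), const(4.0))
print(y.eval())   # prints 10.0
```

Separating graph construction from execution is what lets a framework optimize, parallelize, and place the computation on accelerators before anything runs.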
AI applications
Computer vision
There are two main hurdles for machines to achieve computer vision: (1) digitizing a huge amount of analog signal in near real time; and (2) reading through that data to identify objects. While the first bottleneck was resolved by advances in digital and smartphone cameras, the second has only recently been overcome by developments in deep learning. The best-known breakthrough came in 2012, when Google’s AI network learned to identify cats in videos. Since then, various companies have been active in advancing computer vision and extending its use to facial recognition.
ADAS/autonomous driving
Simply put, autonomous driving refers to vehicles that can drive themselves with little or even no human interaction. Based on the level of human involvement, the National Highway Traffic Safety Administration (NHTSA) in the U.S. has defined four stages, from assisted to fully autonomous driving.
Hardware
CPU (Central Processing Unit)
The Central Processing Unit (CPU) can be thought of as the brain of a computer, as it processes and executes computing instructions. Depending on the use case, it may be integrated with other components such as memory to form a System on Chip (SoC). To further enhance performance, two or more CPU cores (multi-core) are sometimes included on the same chip. Note, however, that this differs from heterogeneous/accelerated computing, which typically involves two or more different architectures. Of note, Intel is currently the dominant CPU vendor, well ahead of AMD.
Moore’s Law
The term “Moore’s Law” originates from an observation by Gordon Moore, co-founder of Intel, who noted that the number of transistors on a chip roughly doubles every two years. This observation has profound implications for IC performance, energy efficiency and cost: shrinking transistors speeds up electron movement, reduces current leakage and translates into more dies per wafer. While the observation has held true over the past several decades, discussions of its potential death have emerged in recent years (see our earlier section, “More demand at the bleeding edge of Moore’s Law”).
Companies Discussed in This Report (all prices in this report as of market close on 06 February 2018, unless otherwise
indicated)
ASM Pacific Technology Limited (0522.HK/HK$101.80[07 February 2018]/Overweight), Advantest (6857)
(6857.T/¥2123[07 February 2018]/Overweight), Applied Materials (AMAT/$50.25/Overweight), Hitachi High-
Technologies (8036) (8036.T/¥4815[07 February 2018]/Overweight), Inphi (IPHI/$32.09/Overweight), Intel
(INTC/$44.91/Overweight), KLA-Tencor (KLAC/$105.23/Overweight), Lam Research (LRCX/$178.34/Overweight),
Mellanox Technologies (MLNX/$61.90/Neutral), Micron Technology (MU/$43.88/Overweight), NVIDIA Corporation
(NVDA/$225.58/Neutral), SCREEN Holdings (7735) (7735.T/¥8400[07 February 2018]/Neutral), SK hynix
(000660.KS/W71100[07 February 2018]/Neutral), Samsung Electronics (005930.KS/W2290000[07 February
2018]/Overweight), TSMC (2330.TW/NT$240.0[07 February 2018]/Overweight), Tokyo Electron (8035)
(8035.T/¥18760[07 February 2018]/Neutral), Xilinx (XLNX/$68.99/Neutral)
Analyst Certification: The research analyst(s) denoted by an “AC” on the cover of this report certifies (or, where multiple research
analysts are primarily responsible for this report, the research analyst denoted by an “AC” on the cover or within the document
individually certifies, with respect to each security or issuer that the research analyst covers in this research) that: (1) all of the views
expressed in this report accurately reflect his or her personal views about any and all of the subject securities or issuers; and (2) no part of
any of the research analyst's compensation was, is, or will be directly or indirectly related to the specific recommendations or views
expressed by the research analyst(s) in this report. For all Korea-based research analysts listed on the front cover, they also certify, as per
KOFIA requirements, that their analysis was made in good faith and that the views reflect their own opinion, without undue influence or
intervention.
Research excerpts: This note includes excerpts from previously published research. For access to the full reports, including analyst
certification and important disclosures, investment thesis, valuation methodology, and risks to rating and price targets, please contact your
salesperson or the covering analyst’s team or visit www.jpmorganmarkets.com.
Important Disclosures
Market Maker/ Liquidity Provider: J.P. Morgan Securities plc and/or an affiliate is a market maker and/or liquidity provider in
securities issued by Tokyo Electron (8035), Hitachi High-Technologies (8036), Advantest (6857), SCREEN Holdings (7735), Samsung
Electronics, SK hynix.
Client: J.P. Morgan currently has, or had within the past 12 months, the following entity(ies) as clients: Tokyo Electron (8035),
Hitachi High-Technologies (8036), Advantest (6857), Samsung Electronics.
Client/Investment Banking: J.P. Morgan currently has, or had within the past 12 months, the following entity(ies) as investment
banking clients: Hitachi High-Technologies (8036), Samsung Electronics.
Client/Non-Investment Banking, Securities-Related: J.P. Morgan currently has, or had within the past 12 months, the following
entity(ies) as clients, and the services provided were non-investment-banking, securities-related: Tokyo Electron (8035), Hitachi High-
Technologies (8036), Advantest (6857), Samsung Electronics.
Client/Non-Securities-Related: J.P. Morgan currently has, or had within the past 12 months, the following entity(ies) as clients, and
the services provided were non-securities-related: Tokyo Electron (8035), Hitachi High-Technologies (8036), Samsung Electronics.
Investment Banking (past 12 months): J.P. Morgan received in the past 12 months compensation for investment banking services
from Hitachi High-Technologies (8036), Samsung Electronics.
Investment Banking (next 3 months): J.P. Morgan expects to receive, or intends to seek, compensation for investment banking
services in the next three months from Hitachi High-Technologies (8036), Samsung Electronics.
Non-Investment Banking Compensation: J.P. Morgan has received compensation in the past 12 months for products or services
other than investment banking from Tokyo Electron (8035), Hitachi High-Technologies (8036), Advantest (6857), Samsung Electronics.
Other Significant Financial Interests: J.P. Morgan owns a position of 1 million USD or more in the debt securities of Tokyo
Electron (8035), Hitachi High-Technologies (8036), Advantest (6857), SCREEN Holdings (7735), Samsung Electronics, SK hynix.
Gartner: All statements in this report attributable to Gartner represent J.P. Morgan's interpretation of data opinion or viewpoints
published as part of a syndicated subscription service by Gartner, Inc., and have not been reviewed by Gartner. Each Gartner publication
speaks as of its original publication date (and not as of the date of this report). The opinions expressed in Gartner publications are not
representations of fact, and are subject to change without notice.
Company-Specific Disclosures: Important disclosures, including price charts and credit opinion history tables, are available for
compendium reports and all J.P. Morgan–covered companies by visiting https://www.jpmm.com/research/disclosures, calling 1-800-477-
0406, or e-mailing research.disclosure.inquiries@jpmorgan.com with your request. J.P. Morgan’s Strategy, Technical, and Quantitative
Research teams may screen companies not covered by J.P. Morgan. For important disclosures for these companies, please call 1-800-477-
0406 or e-mail research.disclosure.inquiries@jpmorgan.com.
Explanation of Equity Research Ratings, Designations and Analyst(s) Coverage Universe:
J.P. Morgan uses the following rating system: Overweight [Over the next six to twelve months, we expect this stock will outperform the
average total return of the stocks in the analyst’s (or the analyst’s team’s) coverage universe.] Neutral [Over the next six to twelve
months, we expect this stock will perform in line with the average total return of the stocks in the analyst’s (or the analyst’s team’s)
coverage universe.] Underweight [Over the next six to twelve months, we expect this stock will underperform the average total return of
the stocks in the analyst’s (or the analyst’s team’s) coverage universe.] Not Rated (NR): J.P. Morgan has removed the rating and, if
applicable, the price target, for this stock because of either a lack of a sufficient fundamental basis or for legal, regulatory or policy
reasons. The previous rating and, if applicable, the price target, no longer should be relied upon. An NR designation is not a
recommendation or a rating. In our Asia (ex-Australia and ex-India) and U.K. small- and mid-cap equity research, each stock’s expected
total return is compared to the expected total return of a benchmark country market index, not to those analysts’ coverage universe. If it
does not appear in the Important Disclosures section of this report, the certifying analyst’s coverage universe can be found on J.P.
Morgan’s research website, www.jpmorganmarkets.com.
Coverage Universe: Hariharan, Gokul: ASE (2311.TW), ASUSTek Computer (2357.TW), Compal Electronics, Inc. (2324.TW), Delta
Electronics, Inc. (2308.TW), GDS Holdings (GDS), Hangzhou HikVision Digital Technology Co., Ltd - A (002415.SZ), Hon Hai
Precision (2317.TW), Lenovo Group Limited (0992.HK), MediaTek Inc. (2454.TW), Pegatron Corp (4938.TW), Quanta Computer Inc.
(2382.TW), SMIC (0981.HK), SPIL (2325.TW), TSMC (2330.TW), UMC (2303.TW), Wistron Corporation (3231.TW), Zhejiang Dahua
Technology Co., Ltd - A (002236.SZ)
Park, JJ: LG Electronics (066570.KS), Panasonic (6752) (6752.T), SK hynix (000660.KS), Samsung Electronics (005930.KS), Sony
(6758) (6758.T)
Moriyama, Hisashi: Advantest (6857) (6857.T), Canon (7751) (7751.T), Disco (6146) (6146.T), FUJIFILM Holdings (4901) (4901.T),
Hitachi (6501) (6501.T), Hitachi High-Technologies (8036) (8036.T), Konica Minolta (4902) (4902.T), Mitsubishi Electric (6503)
(6503.T), Nikon (7731) (7731.T), Ricoh (7752) (7752.T), SCREEN Holdings (7735) (7735.T), Seiko Epson (6724) (6724.T), Tokyo
Electron (8035) (8035.T), Tokyo Seimitsu (7729) (7729.T), Toshiba (6502) (6502.T), ULVAC (6728) (6728.T)
Sur, Harlan: Advanced Micro Devices (AMD), Analog Devices (ADI), Applied Materials (AMAT), Broadcom Limited (AVGO),
Cavium Inc (CAVM), Cypress Semiconductor (CY), Inphi (IPHI), Integrated Device Technology (IDTI), Intel (INTC), KLA-Tencor
(KLAC), Lam Research (LRCX), MACOM (MTSI), Marvell Technology Group (MRVL), Maxim Integrated Products (MXIM),
Mellanox Technologies (MLNX), Microchip Technology (MCHP), Micron Technology (MU), NVIDIA Corporation (NVDA), NXP
Semiconductors (NXPI), ON Semiconductor Corporation (ON), Orbotech (ORBK), Texas Instruments (TXN), Vishay Intertechnology
(VSH), Xilinx (XLNX)
Deshpande, Sandeep S: ASM International (ASMI.AS), ASML (ASML.AS), ASML ADR (ASML), Dialog Semiconductor (DLGS.DE),
Ericsson (ERICb.ST), Ericsson ADR (ERIC), Gemalto (GTO.AS), Infineon Technologies (IFXGn.F), Ingenico (INGC.PA), Nets
(NETS.CO), Nokia (NOKIA.HE), Nokia ADR (NOK), OSRAM (OSRn.DE), STMicroelectronics (STM.PA), VAT (VACN.S), ams
(AMS.S)
Equity Valuation and Risks: For valuation methodology and risks associated with covered companies or price targets for covered
companies, please see the most recent company-specific research report at http://www.jpmorganmarkets.com, contact the primary analyst
or your J.P. Morgan representative, or email research.disclosure.inquiries@jpmorgan.com. For material information about the proprietary
models used, please see the Summary of Financials in company-specific research reports and the Company Tearsheets, which are
available to download on the company pages of our client website, http://www.jpmorganmarkets.com. This report also sets out within it
the material underlying assumptions used.
Equity Analysts' Compensation: The equity research analysts responsible for the preparation of this report receive compensation based
upon various factors, including the quality and accuracy of research, client feedback, competitive factors, and overall firm revenues.
Registration of non-US Analysts: Unless otherwise noted, the non-US analysts listed on the front of this report are employees of non-US
affiliates of JPMS, are not registered/qualified as research analysts under NASD/NYSE rules, may not be associated persons of JPMS,
and may not be subject to FINRA Rule 2241 restrictions on communications with covered companies, public appearances, and trading
securities held by a research analyst account.
Other Disclosures
J.P. Morgan ("JPM") is the global brand name for J.P. Morgan Securities LLC ("JPMS") and its affiliates worldwide. J.P. Morgan Cazenove is a marketing
name for the U.K. investment banking businesses and EMEA cash equities and equity research businesses of JPMorgan Chase & Co. and its subsidiaries.
All research reports made available to clients are simultaneously available on our client website, J.P. Morgan Markets. Not all research content is
redistributed, e-mailed or made available to third-party aggregators. For all research reports available on a particular stock, please contact your sales
representative.
Options related research: If the information contained herein regards options related research, such information is available only to persons who have
received the proper option risk disclosure documents. For a copy of the Option Clearing Corporation's Characteristics and Risks of Standardized Options,
please contact your J.P. Morgan Representative or visit the OCC's website at https://www.theocc.com/components/docs/riskstoc.pdf
Investment research issued by JPMS plc has been prepared in accordance with JPMS plc's policies for managing conflicts of interest arising as a result of
publication and distribution of investment research. Many European regulators require a firm to establish, implement and maintain such a policy. Further
information about J.P. Morgan's conflict of interest policy and a description of the effective internal organisations and administrative arrangements set up
for the prevention and avoidance of conflicts of interest is set out at the following link https://www.jpmorgan.com/jpmpdf/1320742677360.pdf. This report
has been issued in the U.K. only to persons of a kind described in Article 19 (5), 38, 47 and 49 of the Financial Services and Markets Act 2000 (Financial
Promotion) Order 2005 (all such persons being referred to as "relevant persons"). This document must not be acted on or relied on by persons who are not
relevant persons. Any investment or investment activity to which this document relates is only available to relevant persons and will be engaged in only
with relevant persons. In other EEA countries, the report has been issued to persons regarded as professional investors (or equivalent) in their home
jurisdiction. Australia: This material is issued and distributed by JPMSAL in Australia to "wholesale clients" only. This material does not take into
account the specific investment objectives, financial situation or particular needs of the recipient. The recipient of this material must not distribute it to any
third party or outside Australia without the prior written consent of JPMSAL. For the purposes of this paragraph the term "wholesale client" has the
meaning given in section 761G of the Corporations Act 2001. Germany: This material is distributed in Germany by J.P. Morgan Securities plc, Frankfurt
Branch which is regulated by the Bundesanstalt für Finanzdienstleistungsaufsicht. Hong Kong: The 1% ownership disclosure as of the previous month end
satisfies the requirements under Paragraph 16.5(a) of the Hong Kong Code of Conduct for Persons Licensed by or Registered with the Securities and
Futures Commission. (For research published within the first ten days of the month, the disclosure may be based on the month end data from two months
prior.) J.P. Morgan Broking (Hong Kong) Limited is the liquidity provider/market maker for derivative warrants, callable bull bear contracts and stock
options listed on the Stock Exchange of Hong Kong Limited. An updated list can be found on HKEx website: http://www.hkex.com.hk. Korea: This report
may have been edited or contributed to from time to time by affiliates of J.P. Morgan Securities (Far East) Limited, Seoul Branch. Singapore: As at the
date of this report, JPMSS is a designated market maker for certain structured warrants listed on the Singapore Exchange where the underlying securities
may be the securities discussed in this report. Arising from its role as designated market maker for such structured warrants, JPMSS may conduct hedging
activities in respect of such underlying securities and hold or have an interest in such underlying securities as a result. The updated list of structured
warrants for which JPMSS acts as designated market maker may be found on the website of the Singapore Exchange Limited: http://www.sgx.com. In
addition, JPMSS and/or its affiliates may also have an interest or holding in any of the securities discussed in this report – please see the Important
Disclosures section above. For securities where the holding is 1% or greater, the holding may be found in the Important Disclosures section above. For all
other securities mentioned in this report, JPMSS and/or its affiliates may have a holding of less than 1% in such securities and may trade them in ways
different from those discussed in this report. Employees of JPMSS and/or its affiliates not involved in the preparation of this report may have investments
in the securities (or derivatives of such securities) mentioned in this report and may trade them in ways different from those discussed in this report.
Taiwan: This material is issued and distributed in Taiwan by J.P. Morgan Securities (Taiwan) Limited. According to Paragraph 2, Article 7-1 of
Operational Regulations Governing Securities Firms Recommending Trades in Securities to Customers (as amended or supplemented) and/or other
applicable laws or regulations, please note that the recipient of this material is not permitted to engage in any activities in connection with the material
which may give rise to conflicts of interests, unless otherwise disclosed in the “Important Disclosures” in this material. India: For private circulation only,
not for sale. Pakistan: For private circulation only, not for sale. New Zealand: This material is issued and distributed by JPMSAL in New Zealand only to
persons whose principal business is the investment of money or who, in the course of and for the purposes of their business, habitually invest money.
JPMSAL does not issue or distribute this material to members of "the public" as determined in accordance with section 3 of the Securities Act 1978. The
recipient of this material must not distribute it to any third party or outside New Zealand without the prior written consent of JPMSAL. Canada: The
information contained herein is not, and under no circumstances is to be construed as, a prospectus, an advertisement, a public offering, an offer to sell
securities described herein, or solicitation of an offer to buy securities described herein, in Canada or any province or territory thereof. Any offer or sale of
the securities described herein in Canada will be made only under an exemption from the requirements to file a prospectus with the relevant Canadian
securities regulators and only by a dealer properly registered under applicable securities laws or, alternatively, pursuant to an exemption from the dealer
registration requirement in the relevant province or territory of Canada in which such offer or sale is made. The information contained herein is under no
circumstances to be construed as investment advice in any province or territory of Canada and is not tailored to the needs of the recipient. To the extent that
the information contained herein references securities of an issuer incorporated, formed or created under the laws of Canada or a province or territory of
Canada, any trades in such securities must be conducted through a dealer registered in Canada. No securities commission or similar regulatory authority in
Canada has reviewed or in any way passed judgment upon these materials, the information contained herein or the merits of the securities described herein,
and any representation to the contrary is an offence. Dubai: This report has been issued to persons regarded as professional clients as defined under the
DFSA rules. Brazil: Ombudsman J.P. Morgan: 0800-7700847 / ouvidoria.jp.morgan@jpmorgan.com.
General: Additional information is available upon request. Information has been obtained from sources believed to be reliable but JPMorgan Chase & Co.
or its affiliates and/or subsidiaries (collectively J.P. Morgan) do not warrant its completeness or accuracy except with respect to any disclosures relative to
JPMS and/or its affiliates and the analyst's involvement with the issuer that is the subject of the research. All pricing is indicative as of the close of market
for the securities discussed, unless otherwise stated. Opinions and estimates constitute our judgment as of the date of this material and are subject to change
without notice. Past performance is not indicative of future results. This material is not intended as an offer or solicitation for the purchase or sale of any
financial instrument. The opinions and recommendations herein do not take into account individual client circumstances, objectives, or needs and are not
intended as recommendations of particular securities, financial instruments or strategies to particular clients. The recipient of this report must make its own
independent decisions regarding any securities or financial instruments mentioned herein. JPMS distributes in the U.S. research published by non-U.S.
affiliates and accepts responsibility for its contents. Periodic updates may be provided on companies/industries based on company specific developments or
announcements, market conditions or any other publicly available information. Clients should contact analysts and execute transactions through a J.P.
Morgan subsidiary or affiliate in their home jurisdiction unless governing law permits otherwise.