Google's surprise chip announcement: are Nvidia and Intel under threat?

(Original title: Google's surprise chip play) By Qian Tongxin

In the early hours of May 18, Beijing time, the Google I/O 2017 developer conference opened. As one of the world's most important conferences for software developers, this year's event covered the Android system, VR/AR, artificial intelligence, driverless cars, and other areas. At the outset, Google declared that "we have moved from mobile-first to AI-first."

Unlike previous conferences, which focused more on software, the most dazzling part of this year's event was hardware. Google CEO Sundar Pichai announced the official launch of the second-generation TPU (Tensor Processing Unit), a processor dedicated to artificial intelligence. The move is consistent with Google's AI-first strategy and its ambition to become a true cloud computing provider.

The TPU is a high-performance processor that Google developed in-house for AI computing services. The newly released second-generation chip not only deepens capabilities in machine learning training and inference; Google is also serious about bringing it to market. According to Google's internal tests, the second-generation chip can cut machine learning training time to half that of the GPUs currently on the market. According to Li Feifei (Fei-Fei Li), chief scientist of Google's cloud computing team and director of the Stanford AI Lab, "These TPUs can deliver an astonishing 128 trillion floating-point operations per second. They are chips designed to drive machine learning."

Although Google has emphasized the difference between AI chips and other chips, the move threatens the market positions of major chip suppliers Intel and Nvidia, both of whose chips are already used in artificial intelligence. Nvidia is currently the leader in graphics chips (GPUs).
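The 128-teraflop figure quoted above can be put in perspective with some back-of-envelope arithmetic. The sketch below is purely illustrative and not from Google: the peak-throughput number comes from the article, while the workload size and sustained-utilization fraction are hypothetical assumptions.

```python
# Back-of-envelope arithmetic for the throughput figure quoted in the article.
# Only the 128 teraflop/s peak is from the article; the workload size and
# utilization fraction below are hypothetical, chosen for illustration.

def training_time_seconds(total_flops: float, peak_flops_per_s: float,
                          utilization: float) -> float:
    """Time to execute `total_flops` at a given fraction of peak throughput."""
    return total_flops / (peak_flops_per_s * utilization)

tpu_peak = 128e12   # 128 trillion floating-point operations per second (article)
workload = 1e18     # hypothetical training job requiring 10^18 FLOPs
util = 0.5          # hypothetical 50% sustained utilization

t = training_time_seconds(workload, tpu_peak, util)
print(f"{t:.1f} s")  # 1e18 / 6.4e13 = 15625.0 s, roughly 4.3 hours
```

Under these assumed numbers, halving training time versus a GPU, as Google's internal tests claim, would shave hours off a single large training run, which is why per-minute cloud billing for such hardware matters to researchers.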
The company recently released a new-generation GPU architecture, named Volta, whose performance is comparable to that of Google's Cloud TPU. A module made up of eight such chips is priced at US$149,000, and Nvidia will begin shipping the chips in the third quarter of this year. Gartner analyst Sheng Linghai told the First Financial reporter: "From a machine-learning perspective, Nvidia's GPUs are currently the most 'brute-force' and the fastest; Intel's are not as fast but can also do the job. But chips have now advanced to the stage of application-level judgment, and Google claims its chip performs best."

Google did not disclose the chip's price, who will manufacture it, or when it will reach the market. For now, Google is still buying Intel's and Nvidia's chips, but as it increasingly moves toward developing its own chip architecture, it could save itself billions of dollars in chip spending in the future. Google said the new AI chip will not be sold to companies, such as Dell, that build data center servers on other chip makers' hardware.

In addition, to let AI developers enjoy the performance gains the TPU brings, Google also officially launched the Cloud TPU computing service, which users can rent. As for pricing, it may adopt the same billing model as the existing GPU computing service, charged by the minute.

"In essence, the TPU is a supercomputer for machine learning," said Urs Hölzle, Google's chief technology officer. "Our goal is to provide the best cloud services."

According to Synergy Research Group's data, Google's cloud services grew more than 80% last year, but Amazon still dominates the public cloud market with a 40% share, and that share continues to grow. Google ranks third in the cloud services market. To compete with Amazon and Microsoft, Google has particularly emphasized the performance of its device.
Google said: "A single Cloud TPU device is made up of four chips and is 12,000 times faster than IBM's Deep Blue supercomputer." Google plans to build 1,000 Cloud TPU systems to support AI researchers who are willing to publicly share details of their research.

Google's move into AI-focused hardware and cloud services is partly driven by the acceleration of its own business. Google already uses TensorFlow, the deep learning system behind AlphaGo, to support search, speech recognition, translation, and image processing. It has also announced several AI research projects, including an effort to develop algorithms that can learn to perform the time-consuming work of fine-tuning other machine learning algorithms. At the same time, Google is developing AI tools for medical image analysis, genome analysis, and molecular discovery.

Gartner analyst Sheng Linghai told the First Financial reporter: "Current AI chips still focus on processing large amounts of data and making judgments. They rely on human training: by comparing large amounts of data, they mechanically select the most probable conclusion. But the room for imagination in the future is enormous."