Baidu's Quest for Accelerating Image Search in the FPGA Mold

Before digging into Baidu itself, it is worth sparing some thought for the market and customer base that frame this story. That backdrop explains why the famous Chinese search engine is trying to step up the speed of its deep learning systems by incorporating field-programmable gate arrays, or FPGAs. The primary aim is to accelerate the performance of its deep learning models for image search.

The basic preface

Microsoft recently reported that Catapult, a novel board built around a specialized reprogrammable chip, dramatically improves the quality of Bing's results. The results are encouraging, and the company plans to integrate the technology into its data centers during 2015. Baidu, the largest search engine in China, reports similar results and views. Both companies presented papers at a conference dedicated to describing advances in microprocessors and auxiliary accelerator technologies; graphics processors, for instance, have long since become accepted components of game consoles and PCs.


The second perspective

For years, the choice was between two kinds of chips. Microprocessors, also referred to as general-purpose chips, can be programmed to handle a wide range of tasks. Fixed-operation accelerators, at the other extreme, perform one set of functions extremely efficiently but cannot be changed. Field-programmable gate arrays sit between the two: they offer a limited form of programmability at somewhat reduced performance compared with a fixed-operation chip, in exchange for flexibility. That flexibility extends the useful life of machines already in the field, because the hardware can be reprogrammed as the software evolves, and it is critical to accelerating any software algorithm that is constantly improved and tweaked. Search is a prime example in this regard; a hypothetical sketch of the trade-off follows.
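
As a rough illustration of that trade-off, the sketch below shows one operation written once and dispatched to whichever chip class is available. All names here are hypothetical; Baidu's actual software stack is not described in the source.

```python
# Hypothetical sketch: the CPU/FPGA/ASIC trade-off expressed as
# interchangeable backends for a single ranking operation.
from typing import Callable, Dict, List

def cpu_rank(scores: List[float]) -> List[int]:
    # General-purpose path: runs anywhere and is trivial to change.
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def fpga_rank(scores: List[float]) -> List[int]:
    # Stand-in for an offloaded kernel: same contract, faster on the
    # board, but changing it means re-synthesizing a bitstream.
    return cpu_rank(scores)  # placeholder; real work happens in hardware

BACKENDS: Dict[str, Callable[[List[float]], List[int]]] = {
    "cpu": cpu_rank,    # maximum flexibility, lowest throughput
    "fpga": fpga_rank,  # reprogrammable middle ground
    # An ASIC would beat both on speed, but any algorithm tweak
    # (routine in search) would require new silicon.
}

def rank(scores: List[float], backend: str = "cpu") -> List[int]:
    return BACKENDS[backend](scores)

print(rank([0.2, 0.9, 0.5], backend="fpga"))  # -> [1, 2, 0]
```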

The Baidu mold

Baidu operates a large fleet of data centers and servers across China, and it has been using FPGAs to accelerate the deep neural networks behind everything from image search, speech recognition, and character recognition to traditional search. Its deployment unit is a board carrying a K7 480T-2L FPGA (a Xilinx Kintex-7 part), which plugs into either a 1U or a 2U server. Studying its varied workloads, the company has found that the FPGA boards are several times more efficient than a GPU or CPU. A rough sketch of the offload pattern follows.
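
The source does not describe Baidu's driver or board interface, so the following is only a minimal sketch under stated assumptions: `FpgaConvAccelerator` and `/dev/fpga0` are invented names, and the host-side loop merely shows the valid convolution the board would compute.

```python
# Hypothetical sketch of serving-side offload: convolution-heavy work
# goes to the PCIe FPGA board while light layers stay on the host CPU.
import numpy as np

class FpgaConvAccelerator:
    """Stand-in for a driver binding to a Kintex-7 class PCIe board."""

    def __init__(self, device: str = "/dev/fpga0"):
        self.device = device  # hypothetical device node

    def conv2d(self, image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        # A real deployment would DMA the buffers to the board; here we
        # compute the same valid convolution on the host for illustration.
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.empty((oh, ow), dtype=np.float32)
        for y in range(oh):
            for x in range(ow):
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
        return out

accel = FpgaConvAccelerator()
image = np.random.rand(8, 8).astype(np.float32)
kernel = np.ones((3, 3), dtype=np.float32) / 9.0  # simple averaging filter
features = accel.conv2d(image, kernel)  # offloaded in the real system
print(features.shape)  # (6, 6)
```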

The company drive

Baidu's new drive to speed up its search mechanism with an improved FPGA setup goes hand in hand with boosting the performance of its convolutional neural networks, and the best part is that it does not need to go the whole nine yards down the GPU route to do so. In all probability, FPGAs are applicable to every production data center: they can be paired with the existing CPUs that serve queries, while the GPUs keep powering things from behind, where the primary focus is training the learning models. A minimal sketch of that division of labor follows this paragraph.
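
To make that division of labor concrete, here is a minimal, purely illustrative sketch; none of these function names come from Baidu, and the model is a placeholder.

```python
# Hypothetical split: GPUs train the model offline, while query serving
# pairs the CPU (parsing, orchestration) with the FPGA board (scoring).

def train_model(dataset, device: str = "gpu"):
    """Offline phase: gradient-heavy training stays on GPUs."""
    print(f"training on {device} over {len(dataset)} examples")
    return {"weights": [0.1, 0.2, 0.3]}  # placeholder model

def serve_query(query: str, model, accelerator: str = "fpga"):
    """Online phase: CPU parses the query, the FPGA board scores it."""
    print(f"cpu: parsed {query!r}; {accelerator}: scoring with model")
    return sorted(model["weights"], reverse=True)

model = train_model(dataset=range(1000))     # GPU-side, behind the scenes
results = serve_query("red bicycle", model)  # CPU + FPGA in production
print(results)
```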

Image credit: keso s