Heterogeneous Accelerated Deep Learning on Spark

Finally. My paper report for my Statistical Machine Learning class.

Abstract:

As a branch of machine learning, deep learning has become a promising approach for building more sophisticated models for a variety of intelligent applications. Recent deep learning work on tasks such as image recognition, speech classification, and automatic machine translation has achieved significant results, but the enormous computation and data volume involved in training remain a heavy burden for a single machine. The recent deep learning framework TensorFlow released a distributed version, yet it still cannot provide quality of service (QoS) or efficiently utilize integrated accelerators such as GPUs and FPGAs to speed up the computation.

In this work, we designed and implemented a distributed deep learning library on top of Spark, a big data framework widely adopted in the distributed systems area. We also aim to integrate accelerators such as GPUs and FPGAs to show how complex neural network applications can be sped up. In the evaluation, we show that the distributed deep learning library improves performance by 1.97x with two nodes and by 2.93x with three nodes.
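The library's actual code is not shown in this post, but as a rough sketch of what data-parallel training on Spark can look like, the Scala snippet below broadcasts the current model weights, has each partition compute local gradients, and averages them on the driver before taking a gradient step. The object name DistributedSGD, the toy dataset, and the linear model are illustrative assumptions, not the library's API; in an accelerated setup, the per-partition gradient computation is where a GPU or FPGA kernel could be invoked.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DistributedSGD {
  // Gradient of the squared loss for a linear model: (w.x - y) * x
  // (hypothetical helper for illustration only)
  def gradient(w: Array[Double], x: Array[Double], y: Double): Array[Double] = {
    val pred = w.zip(x).map { case (wi, xi) => wi * xi }.sum
    x.map(xi => (pred - y) * xi)
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("data-parallel-sgd"))

    // Toy dataset: (features, label) pairs, partitioned across the cluster.
    val data = sc.parallelize(Seq(
      (Array(1.0, 2.0), 1.0),
      (Array(2.0, 1.0), 0.0),
      (Array(0.5, 1.5), 1.0)
    ), numSlices = 3).cache()

    var w = Array(0.0, 0.0)   // model weights held on the driver
    val lr = 0.1
    val n = data.count()

    for (_ <- 1 to 10) {
      val wB = sc.broadcast(w)  // ship the current weights to all workers
      // Each record's gradient is computed on the workers (on CPU here; an
      // accelerator kernel could do this instead), then summed on the driver.
      val gradSum = data
        .map { case (x, y) => gradient(wB.value, x, y) }
        .reduce((a, b) => a.zip(b).map { case (u, v) => u + v })
      // One averaged gradient step on the driver.
      w = w.zip(gradSum).map { case (wi, gi) => wi - lr * gi / n }
      wB.destroy()
    }

    println(s"learned weights: ${w.mkString(", ")}")
    sc.stop()
  }
}
```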
