This blog was written by Yabin Zheng, Liliya Wu and Mary Bennion.

As AI technology rapidly expands onto mobile and edge devices, Arm has played a key role in the AI domain with significant technological influence. Arm has continuously delivered breakthroughs in mobile processor performance, enabling engineers to deploy more deep-learning algorithms on mobile devices.

Baidu is a leading AI company with a strong Internet foundation in China. Its open-source platform PaddlePaddle integrates multi-level components to create an efficient, flexible, and scalable deep-learning platform. Among its products, Paddle Lite is an industry-leading, high-performance inference engine for edge devices, and it has been continuously developed to improve support for Arm-based platforms.

Paddle and Arm share a vision for the mobile hardware ecosystem and have maintained a long-term collaboration. Over the past few months, the Arm Compute Library (ACL) team has worked closely with Paddle’s core R&D team to improve overall inference performance on Arm Cortex-A CPUs and Mali GPUs in mobile and edge devices. The collaboration aims to deliver a better user experience when Arm-based hardware is used as the back-end inference engine. Based on the instruction-set characteristics of different Arm architectures, the technical exchanges and collaborations covered multiple scenarios of compute and memory-access optimization. Drawing on an analysis of key operators in Paddle Lite and the ACL team’s experience, Paddle’s R&D team optimized operator implementations along several dimensions, including but not limited to the following:

Use cases for the Arm Cortex-A CPU:

  • Optimizing instruction scheduling in the assembly implementation by taking the multiply (mul) instruction characteristics of the Cortex-A53/A35 into account.
  • Implementing adaptive blocking strategies for specific computations based on the different numbers of registers available on different processors.
  • Exploiting the characteristics of the data to optimize the logic and adjust the computation strategy, reducing redundant calculations.
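The register-blocking idea above can be illustrated in plain C. Breaking one long dependency chain into several independent accumulators lets the pipeline overlap multiply-add latencies, which is the same principle the hand-tuned assembly applies across NEON vector registers, with the block width chosen to match the register count of the target core. This is an illustrative sketch, not Paddle Lite’s actual implementation:

```c
#include <stddef.h>

/* Naive dot product: a single accumulator creates one long
 * dependency chain, so each multiply-add must wait for the last. */
float dot_naive(const float *a, const float *b, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc += a[i] * b[i];
    return acc;
}

/* Blocked dot product: four independent accumulators break the
 * chain, letting the CPU overlap multiply-add latency. A wider or
 * narrower block would be chosen per core's register budget. */
float dot_blocked4(const float *a, const float *b, size_t n) {
    float acc0 = 0.0f, acc1 = 0.0f, acc2 = 0.0f, acc3 = 0.0f;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        acc0 += a[i]     * b[i];
        acc1 += a[i + 1] * b[i + 1];
        acc2 += a[i + 2] * b[i + 2];
        acc3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; ++i)   /* leftover elements */
        acc0 += a[i] * b[i];
    return (acc0 + acc1) + (acc2 + acc3);
}
```

Both functions compute the same result; the second simply exposes more instruction-level parallelism to the scheduler.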

Use cases for the Arm Mali GPU:

  • Using the data-access characteristics of the Mali GPU architecture to implement highly efficient operator memory access with Buffer objects.
  • Specializing the implementation of 1x1 convolution and optimizing its multi-threaded computation logic.
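One reason a 1x1 convolution lends itself to a specialized kernel is that, with no spatial window, it reduces to a plain matrix multiplication over the flattened H×W positions, which a GPU kernel can then tile across work-items. The following C sketch shows only that mathematical equivalence; it is not Paddle Lite’s OpenCL implementation:

```c
#include <stddef.h>

/* 1x1 convolution over a CHW-layout input. With a 1x1 window each
 * output pixel is just a dot product over input channels, so the
 * whole operator is an (OC x IC) * (IC x H*W) matrix multiply --
 * the form a specialized GPU kernel can tile and parallelize. */
void conv1x1(const float *w,  /* OC x IC weights       */
             const float *x,  /* IC x (H*W) input      */
             float *y,        /* OC x (H*W) output     */
             size_t oc, size_t ic, size_t hw) {
    for (size_t o = 0; o < oc; ++o)
        for (size_t p = 0; p < hw; ++p) {
            float acc = 0.0f;
            for (size_t c = 0; c < ic; ++c)
                acc += w[o * ic + c] * x[c * hw + p];
            y[o * hw + p] = acc;
        }
}
```

On a GPU, each (output channel, pixel) pair of this loop nest maps naturally to one work-item, and the inner reduction over input channels is where memory-access layout (e.g. Buffer objects) matters most.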

Through these and other optimizations, both general and model-specific, Paddle Lite models running on Cortex-A CPUs and Mali GPUs achieved considerable performance improvements. Meanwhile, the accuracy of some models also increased. We measured operator-level and model-level performance of Paddle Lite before and after the optimization across various data dimensions.

On the Cortex-A CPU:

  • Significant improvement in operator running efficiency.

Figure 1: Operators’ performance improvement on Armv8


Figure 2: Operators’ performance improvement on Armv7

  • Performance improvement of typical models.


Figure 3: Performance of typical model on Armv8


Figure 4: Performance of typical model on Armv7

For Mali GPU-based devices, we ran similar tests, with the following results.

  • The computation time of operators is significantly reduced at different scales.
  • The overall performance of the models across different devices is improved.


Figure 5: Model performance improvement of Mali-G76 (OpenCL) on the Mate 30 (Kirin 990)


Figure 6: Model performance improvement of Mali-T860 (OpenCL) on the RK3399

The Paddle team has benefited greatly from this collaboration. Paddle Lite, the mobile inference engine of PaddlePaddle, plays a key role in supporting inference tasks in Baidu’s mobile applications. After the optimization, it shows impressive performance improvements in many commercial applications. Taking some general visual-inspection models in mobile applications as an example (such as long-press recognition), the optimized models achieved a 22% performance speedup and a 3.4% accuracy improvement, substantially improving the user experience. As Paddle Lite continues to grow in operator coverage, running efficiency, and more, it becomes possible to deploy more complex and higher-performing algorithms and models on mobile devices.

With the rapid development of AI today, Arm and Baidu look forward to continued collaboration in shaping the future of AI.

Learn more about Arm Compute Library
Learn more about Paddle Lite
