Swift for TensorFlow is a next-generation platform for deep learning and differentiable programming. A lot of people are excited about it. You should be too.
Swift is a programming language Apple introduced in 2014 to replace the aging Objective-C. It is the primary language for all of Apple's platforms: iOS, iPadOS, tvOS, and macOS. That covers roughly half of the mobile market and about 10% of the PC market. Swift was soon ported to Linux, and it is now a growing choice for server-side programming. The language is powerful, elegant, and expressive, and has a huge user base and community.
It is no wonder the TensorFlow team chose it to build their next-generation machine learning platform.
I saw huge potential the moment I first heard about this project. It is not just for research and backend services: it opens up so many possibilities for applications, including apps for Mac and Linux, and in the near future Windows and mobile.
Even at this pre-release stage, while Swift for TensorFlow is still rapidly evolving and growing, it is already well ahead of other projects in supporting client-side application development.
I had painfully waded through Torch, PyTorch, C++, CoreML, and the TensorFlow C++ API trying to find a way to package and run a machine learning model inside an app. Swift for TensorFlow made it easy and straightforward. It was the first project I felt the urge to contribute back to, and I did.
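To give a sense of why this felt so straightforward: in Swift for TensorFlow, a model is an ordinary Swift type conforming to the `Layer` protocol, so it can live directly inside an app target. The sketch below uses a hypothetical two-layer classifier with random initial weights standing in for whatever model an app would actually ship; it is a minimal illustration of the API shape, not a production setup.

```swift
import TensorFlow

// A minimal classifier. In a real app, trained weights would be
// loaded from a checkpoint bundled with the application.
struct Classifier: Layer {
    var hidden = Dense<Float>(inputSize: 4, outputSize: 8, activation: relu)
    var output = Dense<Float>(inputSize: 8, outputSize: 3)

    @differentiable
    func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
        // Feed the input through the two layers in order.
        return input.sequenced(through: hidden, output)
    }
}

let model = Classifier()

// Inference runs entirely on-device: no network round trip.
let features = Tensor<Float>(shape: [1, 4], scalars: [5.1, 3.5, 1.4, 0.2])
let logits = model(features)
print(logits.shape) // [1, 3]
```

Because the model is just a value type, it can be instantiated and called like any other Swift code, which is exactly what makes embedding it in a Mac or Linux app so natural.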
The common approach to serving machine learning models is to run inference in the cloud, since mobile phones and PCs usually lack the computing power. Even a machine with a good GPU is hard to configure and maintain.
Running a machine learning model entirely off the cloud has its benefits. It is much more responsive: take typing, for example, where a round trip to the cloud can take seconds and would make the typing experience horrible. Running locally is also cheaper, since cloud GPU instances cost far more than regular instances.
With Swift for TensorFlow, we built AI Editor, which runs machine learning inference locally. It provides faster responses, and more importantly, it can be free. If we had to run a cloud GPU instance to serve those word suggestions, users would need to pay a fee for the service.