Last year Google released TensorFlow, its in-house machine learning framework, for free. TensorFlow is a very powerful framework, and it’s already rapidly overtaking the other community-developed ML frameworks. It’s a reasonable projection that within a couple of years, non-TensorFlow ML will be relegated to a niche role.
I’ve used TensorFlow and I like it a lot. It’s really a step up compared to what we had before.
The skeptical mind would ask why Google – a multi-billion-dollar publicly traded company – would release one of its most important assets for anyone to use. Could it be user lock-in, as Microsoft frequently tries to achieve with its products? Well, no, because TensorFlow is open source, and Google makes no money (at least not directly) off of people using it. If you’re familiar with ML, though, you probably know why Google released it for free: they know you can’t compete with Google by using it.
Why? Because in the current state of ML research, models and code really aren’t that important. Any programmer with even limited ML experience could read the papers and code their own ML framework. Indeed, that’s what I did in my spare time just a couple of years ago. It really wasn’t that hard.
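To make the point concrete, here is a minimal sketch of the kind of thing I mean: a two-layer neural network with backpropagation worked out by hand, using nothing but NumPy. It’s a toy (trained on XOR, the classic problem a linear model can’t solve), not anyone’s production framework, but it shows how little code the core ideas require.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 16 sigmoid units, one sigmoid output.
W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule on the squared error, by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should be close to [0, 1, 1, 0]
```

The whole learning algorithm fits in a dozen lines. Scaling it up is where compute and data come in.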
What is really important, instead, is compute and data. That is, having access to a large set of data to train your algorithms on, and having the sheer computing power necessary to run your models.
Compute and data are the lifeblood of machine learning, and modern ML has a voracious appetite for both. Long gone are the old days of ML, when advances were made by postgrads coming up with optimal, hand-tuned models and running code on their own CPUs, and everything was neat and tidy and small and cute. Nowadays ML is driven by enormous clusters of computers and by vast, expensive collections of data that tech companies guard more closely, and value more highly, than perhaps any of their other assets.
If anything, the more ML researchers use TF, the better it is for Google, because it streamlines and lubricates the process of feeding back researcher-developed ideas into Google’s own production.
In the ancient world, gold and silver and grain and salt were the major currencies; they were the commodities that drove the world economy. In the 20th century, it was oil. Today, it is data. And much like the oil barons of old, today’s tech companies have found themselves – partly by design and partly by accident – in possession of a commodity that suddenly and dramatically increased in importance, making them rich beyond their wildest dreams. And just like the oil barons, they are scrambling to consolidate their holdings and their capital and force smaller players out of the market. Why else was Instagram – a web service for ‘retouching’ photos – bought for $1 billion?
So, in this kind of ecosystem, is there any place for the intrepid newcomer to come in and actually make a meaningful impact? Can someone with no access to capital (which now means data) and no access to labor (which now means computing power) actually compete?
There’s been a lot of negativity in the ML community recently: the feeling that without access to these resources, it’s hopeless. For example, Neil Lawrence (a professor of Machine Learning at Sheffield) has said that we need to ‘democratize’ AI by coming up with ML methods that do not require the same amount of resources. That approach may or may not work, and there are some fundamental reasons it may not. But regardless, the bigger point he wants to make is that the small guys can’t compete with the big guys anymore. And to some extent this is true – if you try to do exactly what the big guys are doing. The obvious subtext here is to find things that the big companies are not doing.
Today’s large tech companies take the crude data that gushes in, collect it, refine it into ML software and trained models, and sell it in the form of distilled products like advertising, web services, analytics, and so on. But the problem with these large companies is that they are slow to move and innovation (risk-taking) is hard for them. This is the prime reason Google restructured itself under Alphabet, splitting off its R&D divisions – but whether this can actually solve the problem remains to be seen. Even when innovation does happen (e.g. Google Glass, or the self-driving car), bringing it to market is slow, because it is hard for them to let go of their perceived stability and monopoly and commit their resources to risky projects. A beautiful historical parallel is Xerox and its Alto graphical computer, the inspiration for the Apple Macintosh (and probably all other major graphical user interfaces). Xerox itself struggled to make money off of the Alto, but a couple of now well-known young hackers took the idea and made a fortune.
Comma.ai is a new startup company that is aiming to sell self-driving car kits for $1000 that anyone can add to their own car. The CEO is a bit eccentric, and the company may not work out in the end, but I am 100% sure that some similar idea is going to make it to market soon.
Get Around the Problem
DeepMind itself, now owned by Google, is actually a great example of a newcomer company making a huge splash. In their case, they trained a reinforcement learner to play Atari games in a now-famous demo. They got around the problem of having access to data by using a simulated game environment where they could generate as much data as they wanted! With the problem of data solved, they just had to solve the compute problem, and that they did by using the resources of the University of Toronto. Another example is Cylance: a company that tries to detect malware using deep learning techniques; data is easy to come by here.
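The trick of generating your own data from a simulator can be sketched in a few lines. The following is a hedged toy stand-in for the idea – tabular Q-learning on a tiny hand-rolled gridworld, not DeepMind’s actual Atari setup – but it makes the point: the `step` function *is* the data source, and it produces as many training transitions as you care to ask for, for free.

```python
import numpy as np

N = 5  # 5x5 grid; the agent starts at (0, 0), the goal is (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Simulated environment: unlimited training data on demand."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    done = (nr, nc) == (N - 1, N - 1)
    return (nr, nc), (1.0 if done else -0.01), done

Q = np.zeros((N, N, len(ACTIONS)))   # tabular action-value function
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.5, 0.95, 0.1

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        # Standard Q-learning update.
        target = reward + (0.0 if done else gamma * Q[nxt].max())
        Q[state][a] += alpha * (target - Q[state][a])
        state = nxt
```

After training, the greedy policy walks from corner to corner in the minimum eight steps – and at no point did we need anyone’s proprietary dataset.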
Other companies use things like web scraping and so on to obtain large troves of data from the internet. The downside to this is that processing raw data from the internet is usually more compute-intensive than curating your own data, so you might wind up trading off data for compute.
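A small illustration of why raw internet data costs compute: scraped pages arrive wrapped in markup, scripts, and styling that all have to be stripped before the data is usable. This is a minimal sketch using only Python’s standard-library `html.parser`, with an inlined HTML string standing in for a scraped page.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Pull visible text out of raw HTML, skipping scripts and styles."""
    def __init__(self):
        super().__init__()
        self.skip = 0       # depth inside <script>/<style> elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

# Stand-in for a page fetched off the web.
raw = """<html><head><style>body{color:red}</style></head>
<body><h1>Self-driving cars</h1><script>var x=1;</script>
<p>Comma.ai sells a $1000 kit.</p></body></html>"""

parser = TextExtractor()
parser.feed(raw)
print(parser.chunks)  # ['Self-driving cars', 'Comma.ai sells a $1000 kit.']
```

And this is the easy part – deduplication, language detection, and quality filtering across billions of pages are where the real compute bill shows up.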
So in summary, the best advice to newcomers in the field would be to stop copying other people and look for new, original paths to take. It might be somewhat obvious advice, but there it is.