Two startups, California-based Syntiant and Texas-based Mythic, are betting on the same idea: embedded flash memory. Both believe embedded flash can dramatically cut the power required for deep-learning computations, and both may well be right. Many companies are now planning chips to accelerate deep-learning applications, a problem that is inherently difficult and demands significant engineering effort.
Because these solutions are all shaped by the same underlying problem, it is no surprise that they resemble one another to a degree, according to Dave Fick, founder and CTO of Mythic.
Executed on a CPU, that problem looks like a traffic jam of data. The dominant energy cost in deep learning is moving the weights, the values that encode the strength of the connections in a neural network, to the right place at the right time.
Kurt Busch, CEO of Syntiant, said his company's approach eliminates both the memory-power penalty and the memory-bandwidth bottleneck by performing the computation inside the memory itself. Mythic is pursuing a similar strategy toward a similar result.
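The intuition can be sketched in a toy model. The code below is purely illustrative and reflects neither company's actual hardware: it contrasts a conventional execution path, where every weight must cross the memory bus before it is multiplied, with an abstract compute-in-memory array, where weights are written once and only activations and results ever move.

```python
# Toy model of why moving weights dominates energy in deep-learning inference.
# The fetch counter and the InMemoryArray abstraction are illustrative
# assumptions, not a description of Syntiant's or Mythic's designs.

def cpu_style_layer(weights, inputs):
    """Conventional execution: each weight is fetched from memory to the
    processor before it is used, so transfers scale with the weight count."""
    fetches = 0
    outputs = []
    for row in weights:
        acc = 0.0
        for w, x in zip(row, inputs):
            fetches += 1          # weight crosses the memory bus
            acc += w * x
        outputs.append(acc)
    return outputs, fetches

class InMemoryArray:
    """Compute-in-memory abstraction: weights are stored in the array once;
    inference sends activations in and reads results out, nothing else."""
    def __init__(self, weights):
        self.weights = weights    # held in place, never re-fetched

    def matvec(self, inputs):
        # Multiply-accumulate happens where the weights live.
        return [sum(w * x for w, x in zip(row, inputs))
                for row in self.weights]

weights = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]   # a tiny 3x2 layer
inputs = [1.0, 2.0]

out_cpu, fetches = cpu_style_layer(weights, inputs)
out_mem = InMemoryArray(weights).matvec(inputs)

assert out_cpu == out_mem   # same math either way
print(fetches)              # 6 weight transfers on the CPU-style path, 0 in-memory
```

In a real network the weight count runs into the millions, so eliminating those transfers, rather than speeding up the arithmetic, is where the power savings come from.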
But while the two companies appear closely aligned, there are significant differences as well, chief among them their target customers and applications.