IBM began their AI push early on, and they dominate enterprise-class AI with their Watson offering.
Watson was initially targeted at the healthcare segment and focused on making difficult diagnoses, and I still recall its first public validation test. IBM took a problem that had stumped doctors for years: a woman with strange symptoms and a painful, undiagnosed disease. In 15 minutes, the system identified both the cause of her problem and the likely cure.
I've seen the machine compete successfully on Jeopardy! and, more recently, do a pretty impressive job in a live debate. While Watson lost that competition, I think the judges were biased toward the human; the machine was funnier and more entertaining than its opponent was. Ironically, it may have been the humor that got Watson into trouble. But the performance gave me a sense of what a future digital assistant might be able to accomplish.
Granted, at one time I thought IBM's alliance with Apple might lead to an impressive Watson back end for Siri. However, Apple remains one of the leading "not invented here" companies, and unfortunately that never happened. As a result, we are still dealing with personal AIs whose capabilities fall well below what we might have if a Watson-class machine sat behind our personal digital assistants today.
IBM has five labs around the world working on AI advancement, and they are developing unique processors and memory systems to increase the speed and intelligence of those efforts. They report that their performance per watt is currently improving by roughly 2.5x per year. That rate is well ahead of the industry benchmark, Moore's Law, under which performance doubles every two years; compounded over the same two years, IBM's claimed pace works out to more than 6x, or better than triple the Moore's Law rate.
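As a quick sanity check on that comparison (my arithmetic here, not figures from IBM), a few lines of Python make the compounding explicit:

```python
# Rough comparison of the two improvement rates named above: Moore's Law
# is usually stated as performance doubling every two years, versus
# IBM's reported ~2.5x performance-per-watt gain per year.

moores_per_year = 2 ** (1 / 2)  # doubling every 2 years ~= 1.41x per year
ibm_per_year = 2.5              # IBM's reported annual rate

print(f"Moore's Law, annualized: {moores_per_year:.2f}x per year")
print(f"IBM's reported rate:     {ibm_per_year:.2f}x per year")
print(f"Compounded over two years: {ibm_per_year ** 2:.2f}x vs. Moore's 2x")
```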
This sustained pace of improvement has also forced them to rethink component interconnects and to create a new memory type. That memory has elements surprisingly similar to Intel's Optane product, but with what appears to be far higher density and performance more in line with IBM's AI efforts. This suggests the technology could, if it hasn't already, make its way into IBM's mainstream server and storage products, much as Intel's Optane has.
One of IBM's most impressive efforts is the development of algorithms that allowed them to reduce numerical precision with little effect on accuracy. They started with 32-bit values and were able to cut the precision down to 2 bits. That reduction significantly shrinks the size and power requirements of both the resulting training and inference systems, and the effort could have an impact on other server workloads as well.
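To make the idea concrete, here is a minimal sketch of the general technique in this family, uniform low-bit quantization, under the assumption that something along these lines is involved; it illustrates the concept only, not IBM's actual algorithms, and the function names are my own:

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int = 2):
    """Uniformly quantize float32 weights to a small number of levels.

    A generic illustration of low-precision quantization, not IBM's
    method: 2 bits give 2**2 = 4 representable levels, so each weight
    is stored as a tiny integer code plus a shared scale and offset.
    (Assumes the weights are not all identical.)
    """
    levels = 2 ** bits
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (levels - 1)
    codes = np.round((weights - lo) / scale).astype(np.uint8)  # 0..levels-1
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Reconstruct approximate float32 weights from the low-bit codes."""
    return codes.astype(np.float32) * scale + lo

# Example: the 2-bit version needs 1/16 the storage of float32, at the
# cost of rounding every weight to one of only four values.
w = np.random.randn(8).astype(np.float32)
codes, scale, lo = quantize(w, bits=2)
print("original:", np.round(w, 3))
print("restored:", np.round(dequantize(codes, scale, lo), 3))
```

Naive rounding like this usually costs accuracy, which is why the research matters: the hard part IBM is claiming to have solved is getting down to 2 bits while keeping that loss negligible.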