Dr. Rumman Chowdhury, a senior AI manager at Accenture, predicts that as artificial intelligence continues to play a role in geopolitics, the technology will face more regulation; Andrew Ng, cofounder of Google Brain and former chief scientist at Baidu, says businesses outside the software industry will develop many AI applications; Yann LeCun, professor and chief AI scientist at Facebook, says he hopes to see progress toward AI that can identify causal relationships between events; and Hilary Mason, machine learning lead at Cloudera, expects the industry to see more accountability and ethical consideration this year.
Dr. Rumman Chowdhury
In the year ahead, Chowdhury expects to see more government scrutiny and regulation of tech around the world.
“AI and the power that is wielded by the global tech giants raises a lot of questions about how to regulate the industry and the technology,” she said. “In 2019, we will have to start coming up with the answers to these questions — how do you regulate a technology when it is a multipurpose tool with context-specific outcomes? How do you create regulation that doesn’t stifle innovation or favor large companies (who can absorb the cost of compliance) over small startups? At what level do we regulate? International? National? Local?”
She also expects to see the continued evolution of AI’s role in geopolitical matters.
“This is more than a technology, it is an economy- and society-shaper. We reflect, scale, and enforce our values in this technology, and our industry needs to be less naive about the implications of what we build and how we build it,” she said. For this to happen, she believes people need to move beyond the idea common in the AI industry that if we don’t build it, China will, as if creation alone is where power lies.
“I hope regulators, technologists, and researchers realize that our AI race is about more than just compute power and technical acumen, just like the Cold War was about more than nuclear capabilities,” she said. “We hold the responsibility of recreating the world in a way that is more just, more fair, and more equitable while we have the rare opportunity to do so. This moment in time is fleeting; let’s not squander it.”
On a consumer level, she believes 2019 will see more use of AI in the home. Many people have become much more accustomed to using smart speakers like Google Home and Amazon Echo, as well as a host of smart devices. On this front, she’s curious to see if anything especially interesting emerges from the Consumer Electronics Show — set to kick off in Las Vegas in the second week of January — that might further integrate artificial intelligence into people’s daily lives.
“I think we’re all waiting for a robot butler,” she said.
Andrew Ng

In the year ahead, Ng is excited to see progress in two specific areas of AI/ML research that advance the field as a whole. One is AI that can arrive at accurate conclusions with less data, something called “few-shot learning” in the field.
“I think the first wave of deep learning progress was mainly big companies with a ton of data training very large neural networks, right? So if you want to build a speech recognition system, train it on 100,000 hours of data.
Want to train a machine translation system? Train it on a gazillion pairs of sentences of parallel corpora, and that creates a lot of breakthrough results,” Ng said. “Increasingly I’m seeing results on small data, where you want to get good results even if you have only 1,000 images.”
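The idea Ng describes can be sketched with a nearest-centroid ("prototypical") classifier, one common few-shot technique: each class is summarized by the mean of a handful of labeled embeddings, and a query is assigned to the nearest class prototype. The vectors below are hand-made stand-ins for the output of a pretrained feature extractor; the classes and values are invented for illustration.

```python
# Minimal few-shot classification sketch: nearest class prototype.
from math import dist  # Euclidean distance (Python 3.8+)

# A few labeled examples per class -- the "shots".
support = {
    "cat": [(1.0, 0.9), (0.8, 1.1), (1.1, 1.0)],
    "dog": [(-1.0, -0.8), (-0.9, -1.1), (-1.2, -1.0)],
}

def prototype(points):
    """Mean of the support embeddings for one class."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

prototypes = {label: prototype(pts) for label, pts in support.items()}

def classify(query):
    """Assign the query embedding to the class with the nearest prototype."""
    return min(prototypes, key=lambda label: dist(query, prototypes[label]))

print(classify((0.9, 1.0)))  # a query near the "cat" cluster -> "cat"
```

With only three examples per class, the model still classifies nearby queries correctly; in practice, the hard part that this sketch hides is learning an embedding space where such distances are meaningful.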
The other is advances in computer vision referred to as “generalizability.” A computer vision system might work great when trained on pristine images from a high-end X-ray machine at Stanford University. And while many advanced companies and researchers in the field have created systems that outperform a human radiologist, those systems aren’t very nimble.
“But if you take your trained model and you apply it to an X-ray taken from a lower-end X-ray machine or taken from a different hospital, where the images are a bit blurrier and maybe the X-ray technician has the patient slightly turned to their right so the angle’s a little bit off, it turns out that human radiologists are much better at generalizing to this new context than today’s learning algorithms. And so I think interesting research [is on] trying to improve the generalizability of learning algorithms in new domains,” he said.
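The failure mode Ng describes can be illustrated with a toy model: a classifier calibrated on one "hospital" fails when the input distribution shifts, and a simple per-domain normalization (one basic domain-adaptation idea, assumed here to know the shift) recovers the behavior. The intensity values and cutoffs are synthetic stand-ins, not real imaging data.

```python
# Toy domain-shift illustration: a fixed threshold fails on shifted inputs.
def threshold_classifier(intensity, cutoff):
    """Label a scan 'abnormal' when its intensity exceeds the cutoff."""
    return "abnormal" if intensity > cutoff else "normal"

# "Hospital A": cutoff tuned so intensities above 0.7 are abnormal.
cutoff_a = 0.7

# "Hospital B" images are systematically dimmer (a lower-end machine):
# every intensity arrives shifted down by 0.3.
shift_b = -0.3
raw_b = 0.75 + shift_b  # a truly abnormal case as captured at Hospital B

naive = threshold_classifier(raw_b, cutoff_a)              # -> "normal" (wrong)
adapted = threshold_classifier(raw_b - shift_b, cutoff_a)  # undo the known shift

print(naive, adapted)  # normal abnormal
```

Real domain adaptation is harder, since the shift is usually unknown and not a simple offset, but the sketch shows why a model that is excellent in its training domain can be worse than a human reader in a new one.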
Yann LeCun

Like Ng, LeCun wants to see more AI systems with the kind of flexibility that leads to robust models, ones that do not require pristine input data or exact conditions to produce accurate output.
LeCun said researchers can already manage perception rather well with deep learning but that a missing piece is an understanding of the overall architecture of a complete AI system.
He said that teaching machines to learn through observation of the world will require self-supervised learning, or model-based reinforcement learning.
“Different people give it different names, but essentially human babies and animals learn how the world works by observing, and they figure out this huge amount of background information about it. We don’t know how to do this with machines yet, but that’s one of the big challenges,” he said. “The prize for that is essentially making real progress in AI: machines that have a bit of common sense, and virtual assistants that are not frustrating to talk to and can handle a wider range of topics and discussions.”
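The core of self-supervised learning is that the training signal comes from the data itself rather than from human labels. A minimal sketch of that idea is next-word prediction over a tiny unlabeled corpus, learned here with simple bigram counts as a stand-in for the large neural models used in practice; the corpus is invented for illustration.

```python
# Self-supervised sketch: the "labels" are the data's own next words.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build supervision from the raw sequence itself: each word predicts its successor.
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def predict_next(word):
    """Most frequently observed successor of `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # learned from raw text alone -> "cat"
```

No one labeled anything here; the structure of the raw observations is the supervision, which is the property LeCun expects will let machines absorb background knowledge about the world.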
For applications that will help internally at Facebook, LeCun said significant progress toward self-supervised learning will be important, as well as AI that requires less data to return accurate results.
“On the way to solving that problem, we’re hoping to find ways to reduce the amount of data that’s necessary for any particular task like machine translation or image recognition or things like this, and we’re already making progress in that direction; we’re already making an impact on the services that are used by Facebook by using weakly supervised or self-supervised learning for translation and image recognition. So those are things that are actually not just long term, they also have very short term consequences,” he said.
In the future, LeCun wants to see progress made toward AI that can establish causal relationships between events. That’s the ability to not just learn by observation, but to have the practical understanding, for example, that if people are using umbrellas, it’s probably raining.
“That would be very important, because if you want a machine to learn models of the world by observation, it has to be able to know what it can influence to change the state of the world and that there are things you can’t do,” he said. “You know if you are in a room and a table is in front of you and there is an object on top of it like a water bottle, you know you can push the water bottle and it’s going to move, but you can’t move the table because it’s big and heavy — things like this related to causality.”
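The umbrella example can be made concrete with a tiny simulation contrasting observation with intervention, in the spirit of causal "do" reasoning. In this hand-built world, rain causes umbrella use; forcing everyone to carry umbrellas does not make it rain. All probabilities are invented for illustration.

```python
# Toy causal world: rain is the cause, umbrellas the effect.
import random

random.seed(0)

def world(intervene_umbrella=None):
    """Simulate one day; optionally force umbrella use (an intervention)."""
    rain = random.random() < 0.3
    if intervene_umbrella is None:
        # People react to the weather (with a small base rate of umbrella use).
        umbrella = random.random() < (0.9 if rain else 0.05)
    else:
        umbrella = intervene_umbrella  # do(umbrella = ...)
    return rain, umbrella

# Observation: seeing an umbrella is strong evidence of rain.
days = [world() for _ in range(10_000)]
p_rain_given_umbrella = (
    sum(r for r, u in days if u) / sum(1 for _, u in days if u)
)

# Intervention: handing out umbrellas leaves the rain rate near its base 30%.
forced = [world(intervene_umbrella=True) for _ in range(10_000)]
p_rain_under_do = sum(r for r, _ in forced) / len(forced)

print(round(p_rain_given_umbrella, 2), round(p_rain_under_do, 2))
```

A purely observational learner would conclude umbrellas and rain go together; a causal model additionally knows which variable it can influence to change the other, which is exactly the distinction LeCun is after.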
Hilary Mason

“This is something that since we founded Fast Forward — so, five years ago — we’ve been writing about ethics in every report, but this year people have really started to pick up and pay attention, and I think next year we’ll start to see the consequences, or some accountability in the space, for companies and for people who pay no attention to this,” Mason said. “What I’m saying, not very clearly, is that I hope the practice of data science and AI evolves such that it becomes the default expectation that both technical folks and business leaders creating products with AI will account for ethics and issues of bias in the development of those products, whereas today it is not the default that anyone thinks about those things.”
As more AI systems become part of business operations in the year ahead, Mason expects that product managers and product leaders will begin to make more contributions on the AI front because they’re in the best position to do so.
“I think it’s clearly the people who have the idea of the whole product in mind, who understand the business, and who understand what would be valuable and not valuable who are in the best position to make these decisions about where they should invest,” she said. “So if you want my prediction, I think in the same way we expect all of those people to be minimally competent at using something like spreadsheets to do simple modeling, we will soon expect them to be minimally competent at recognizing where the AI opportunities in their own products are.”
The democratization of AI, or its expansion to corners of a company beyond data science teams, is something several companies have emphasized: Google with Cloud AI products like Kubeflow Pipelines and AI Hub, and the consultancy CI&T with its advice on ensuring AI systems are actually utilized within a company.
Mason also thinks more and more businesses will need to form structures to manage multiple AI systems.
Mason drew an analogy to a challenge familiar in DevOps: a single system can be managed with hand-deployed custom scripts, and cron jobs can handle a few dozen, but managing tens or hundreds of systems in an enterprise with security, governance, and risk requirements calls for professional, robust tooling.
Businesses are shifting from having pockets of competency or even brilliance to having a systematic way to pursue machine learning and AI opportunities, she said.
The emphasis on containers for deploying AI makes sense to Mason, since Cloudera recently launched its own container-based machine learning platform. She believes this trend will continue in the years ahead, giving companies a choice between on-premise AI and AI deployed in the cloud.
Finally, Mason believes the business of AI will continue to evolve, with common practices across the industry, not just within individual companies.
“I think we will see a continuing evolution of the professional practice of AI,” she said. “Right now, if you’re a data scientist or an ML engineer at one company and you move to another company, your job will be completely different: different tooling, different expectations, different reporting structures. I think we’ll see consistency there.”