One of the poorest-kept secrets in Silicon Valley has been the huge salaries and bonuses that experts in artificial intelligence can command. Now, a little-noticed tax filing by a research lab called OpenAI has made some of those eye-popping figures public.
OpenAI paid its top researcher, Ilya Sutskever, more than $1.9 million in 2016. It paid another leading researcher, Ian Goodfellow, more than $800,000 — even though he was not hired until March of that year. Both were recruited from Google.
A third big name in the field, the roboticist Pieter Abbeel, made $425,000, though he did not join until June 2016, after taking a leave from his job as a professor at the University of California, Berkeley. Those figures all include signing bonuses.
The figures listed on the tax forms, which OpenAI is required to release publicly because it is a nonprofit, provide new insight into what organizations around the world are paying for A.I. talent. But there is a caveat: the compensation at OpenAI may understate what these researchers can make elsewhere, since as a nonprofit it cannot offer stock options.
Salaries for top A.I. researchers have skyrocketed because few people understand the technology and thousands of companies want to work with it. Element AI, an independent lab in Canada, estimates that 22,000 people worldwide have the skills needed to do serious A.I. research, roughly double its estimate from a year earlier.
“There is a mountain of demand and a trickle of supply,” said Chris Nicholson, the chief executive and founder of Skymind, a start-up working on A.I.
That raises significant issues for universities and governments. They also need A.I. expertise, both to teach the next generation of researchers and to put these technologies into practice in everything from the military to drug discovery. But they could never match the salaries being paid in the private sector.
In 2015, Elon Musk, the chief executive of the electric-car maker Tesla, and other well-known figures in the tech industry created OpenAI and moved it into offices just north of Silicon Valley in San Francisco. They recruited several researchers with experience at Google and Facebook, two of the companies leading an industrywide push into artificial intelligence.
In addition to salaries and signing bonuses, the internet giants typically compensate employees with sizable stock options — something that OpenAI does not do. But it has a recruiting message that appeals to idealists: It will share much of its work with the outside world, and it will consciously avoid creating technology that could be a danger to people.
“I turned down offers for multiple times the dollar amount I accepted at OpenAI,” Mr. Sutskever said. “Others did the same.” He said he expected salaries at OpenAI to increase as the organization pursued its “mission of ensuring powerful A.I. benefits all of humanity.”
OpenAI spent about $11 million in its first year, with more than $7 million going to salaries and other employee benefits. It employed 52 people in 2016.
People who work at major tech companies or have entertained job offers from them have told The New York Times that A.I. specialists with little or no industry experience can make between $300,000 and $500,000 a year in salary and stock. Top names can receive compensation packages that extend into the millions.
“The amount of money was borderline crazy,” Wojciech Zaremba, a researcher who joined OpenAI after internships at Google and Facebook, told Wired. While he would not reveal exact numbers, Mr. Zaremba said big tech companies were offering him two or three times what he believed his real market value was.
At DeepMind, a London A.I. lab now owned by Google, costs for 400 employees totaled $138 million in 2016, according to the company’s annual financial filings in Britain. That translates to $345,000 per employee, including researchers and other staff.
Researchers like Mr. Sutskever specialize in what are called neural networks, complex algorithms that learn tasks by analyzing vast amounts of data. They are used in everything from digital assistants in smartphones to self-driving cars.
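To make that idea concrete, here is a minimal sketch of a neural network learning a task from labeled examples. It uses Python with the scikit-learn library and its built-in handwritten-digits dataset, both chosen purely for illustration and not tied to the researchers' actual work.

```python
# Minimal illustration: a small neural network learns to recognize
# handwritten digits by analyzing labeled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small feed-forward network: it adjusts its internal weights by
# repeatedly comparing its predictions against the labeled examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on held-out digits: {model.score(X_test, y_test):.2f}")
```

Industrial systems differ mainly in scale: far larger networks, far more data, and specialized hardware, but the same basic learn-from-examples loop.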
Some researchers may command higher pay because their names carry weight across the A.I. community and they can help recruit other researchers.
Mr. Sutskever was part of a three-researcher team at the University of Toronto that created key technology in the field known as computer vision. Mr. Goodfellow invented a technique that allows machines to create fake digital photos that are nearly indistinguishable from the real thing.
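The adversarial idea behind that technique can be sketched in a few dozen lines. The toy example below, written in Python with PyTorch on one-dimensional data rather than photos, is an illustrative assumption on our part and not the researchers' actual code: a generator fabricates samples while a discriminator tries to tell them apart from real ones, and each improves by competing with the other.

```python
# Toy sketch of a generative adversarial setup: a generator learns to
# produce samples that a discriminator cannot distinguish from real data.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: samples from a normal distribution centered at 4.0.
    return torch.randn(n, 1) * 0.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to label real samples 1 and fakes 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("Mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

After training, the generator's output drifts toward the statistics of the real data; the same mechanism, run at vastly larger scale on images, is what produces convincing fake photos.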
“When you hire a star, you are not just hiring a star,” Mr. Nicholson of the start-up Skymind said. “You are hiring everyone they attract. And you are paying for all the publicity they will attract.”
Other researchers at OpenAI, including Greg Brockman, who leads the lab alongside Mr. Sutskever, did not receive such high salaries during the lab’s first year.
In 2016, according to the tax forms, Mr. Brockman, who had served as chief technology officer at the financial technology start-up Stripe, made $175,000. As one of the founders of the organization, however, he most likely took a salary below market value. Two other researchers with more experience in the field — though still very young — made between $275,000 and $300,000 in salary alone in 2016, according to the forms.
Though the pool of available A.I. researchers is growing, it is not growing fast enough. “If anything, demand for that talent is growing faster than the supply of new researchers, because A.I. is moving from early adopters to wider use,” Mr. Nicholson said.
That means it can be hard for companies to hold on to their talent. Last year, after only 11 months at OpenAI, Mr. Goodfellow returned to Google. Mr. Abbeel and two other researchers left the lab to create a robotics start-up, Embodied Intelligence. (Mr. Abbeel has since signed back on as a part-time adviser to OpenAI.) And another researcher, Andrej Karpathy, left to become the head of A.I. at Tesla, which is also building autonomous driving technology.
In essence, Mr. Musk was poaching his own talent. Since then, he has stepped down from the OpenAI board, with the lab saying this would allow him to “eliminate a potential future conflict.”
Read the source article at The New York Times (via CNBC).