A Look Inside Facebook’s AI Machine

By Steven Levy, Wired

When asked to head Facebook’s Applied Machine Learning group — to supercharge the world’s biggest social network with an AI makeover — Joaquin Quiñonero Candela hesitated. It was not that the Spanish-born scientist, a self-described “machine learning (ML) person,” hadn’t already witnessed how AI could help Facebook. Since joining the company in 2012, he had overseen a transformation of the company’s ad operation, using an ML approach to make sponsored posts more relevant and effective. Significantly, he did this in a way that empowered engineers in his group to use AI even if they weren’t trained to do so, making the ad division richer overall in machine learning skills. But he wasn’t sure the same magic would take hold in the larger arena of Facebook, where billions of people-to-people connections depend on fuzzier values than the hard data that measures ads. “I wanted to be convinced that there was going to be value in it,” he says of the promotion.

Despite his doubts, Candela took the post. And now, after barely two years, his hesitation seems almost absurd.

How absurd? Last month, Candela addressed an audience of engineers at a New York City conference. “I’m going to make a strong statement,” he warned them. “Facebook today cannot exist without AI. Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI.”

Last November I went to Facebook’s mammoth headquarters in Menlo Park to interview Candela and some of his team, so that I could see how AI suddenly became Facebook’s oxygen. To date, much of the attention around Facebook’s presence in the field has been focused on its world-class Facebook Artificial Intelligence Research group (FAIR), led by renowned neural net expert Yann LeCun. FAIR, along with competitors at Google, Microsoft, Baidu, Amazon, and Apple (now that the secretive company is allowing its scientists to publish), is one of the preferred destinations for coveted grads of elite AI programs. It’s one of the top producers of breakthroughs in the brain-inspired digital neural networks behind recent improvements in the way computers see, hear, and even converse. But Candela’s Applied Machine Learning group (AML) is charged with integrating the research of FAIR and other outposts into Facebook’s actual products—and, perhaps more importantly, empowering all of the company’s engineers to integrate machine learning into their work.

Because Facebook can’t exist without AI, it needs all its engineers to build with it.

My visit occurs two days after the presidential election and one day after CEO Mark Zuckerberg blithely remarked that “it’s crazy” to think that Facebook’s circulation of fake news helped elect Donald Trump. The comment would turn out to be the equivalent of driving a fuel tanker into a growing fire of outrage over Facebook’s alleged complicity in the orgy of misinformation that plagued its News Feed in the last year. Though much of the controversy is beyond Candela’s pay grade, he knows that ultimately Facebook’s response to the fake news crisis will rely on machine learning efforts in which his own team will have a part.

But to the relief of the PR person sitting in on our interview, Candela wants to show me something else—a demo that embodies the work of his group. To my surprise, it’s something that performs a relatively frivolous trick: It redraws a photo or streams a video in the style of an art masterpiece by a distinctive painter. In fact, it’s reminiscent of the kind of digital stunt you’d see on Snapchat, and the idea of transmogrifying photos into Picasso’s cubism has already been accomplished.

“The technology behind this is called neural style transfer,” he explains. “It’s a big neural net that gets trained to repaint an original photograph using a particular style.” He pulls out his phone and snaps a photo. A tap and a swipe later, it turns into a recognizable offshoot of Van Gogh’s “The Starry Night.” More impressively, it can render a video in a given style as it streams. But what’s really different, he says, is something I can’t see: Facebook has built its neural net so it will work on the phone itself.
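The core of neural style transfer, as introduced by Gatys et al., is matching “style” statistics between images via Gram matrices of convolutional feature maps. The sketch below is a toy numpy illustration of that style loss only; the random arrays stand in for activations from a pretrained network, and this is in no way Facebook’s on-device implementation.

```python
import numpy as np

def gram_matrix(feats):
    """Style statistics of one conv layer.

    feats: (channels, pixels) array of feature-map activations, flattened
    spatially. The Gram matrix captures which channels co-activate,
    which is what "style" means in style transfer.
    """
    c, n = feats.shape
    return feats @ feats.T / (c * n)

def style_loss(generated, style):
    """Mean squared distance between the two images' Gram matrices.

    Style transfer repeatedly nudges the generated image to shrink this
    loss (plus a content loss, omitted here for brevity).
    """
    return float(np.mean((gram_matrix(generated) - gram_matrix(style)) ** 2))

# Toy activations: 4 channels over an 8x8 feature map, flattened.
rng = np.random.default_rng(0)
style_feats = rng.normal(size=(4, 64))
gen_feats = rng.normal(size=(4, 64))

loss = style_loss(gen_feats, style_feats)    # positive: different "styles"
zero = style_loss(style_feats, style_feats)  # exactly 0: identical styles
```

In the full algorithm, an optimizer updates the generated image’s pixels (or, in the fast feed-forward variant Facebook describes, a trained network produces them in one pass) to drive losses like this toward zero across several layers of a pretrained network.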

Read the source article in Wired.

Key Considerations in AI Vendor Selection, Deployment

The world of artificial intelligence is frightening. No, not the danger of an army of AI-powered robots taking over the world (though that is a bit concerning). The real fear is that the wrong vendor is chosen or the rollout handled poorly. After all, AI is complex, not fully mature, in some cases poorly understood, and involves great changes to how an organization thinks and operates.

Much of the complexity stems from the fact that AI has no single meaning or definition. It is a combination of several elements (machine learning, natural language processing, computer vision and others). This means that use cases tend to be unique and complex. Companies not big enough to hire expertise rely deeply on consultants and vendors, likely more than in more familiar areas. AI is not for the corporate faint of heart.

So how should organizations approach AI?

The first step in any corporate initiative is to fully understand what is on the table. It seems almost needless to say that organizations must educate themselves about AI before taking the plunge. But, in this case, it’s so important that it is worth stating the obvious. They must assess what data they have to feed into the system and if remedial work is necessary to enable that data to be used.

Tractica Research Director Aditya Kaul suggests that organizations understand the difference between the AI platforms that process raw data to reach conclusions and perception-driven approaches that focus on the intricacies and nuances of language and vision. The next step is to experiment on a wide variety of use cases and settle on those that bring the greatest value to the organization. It is important to understand the metrics that will be used to gauge success, such as increased productivity or reduced costs.

Moving Ahead with AI

At that point, they are set to move ahead aggressively. “Once companies have a good understanding of the AI technologies and use cases, they can go [choose] a third-party enterprise-grade AI platform and build a robust framework around data and model warehousing that allows for efficient production-grade AI that can be swiftly deployed into client-facing products and services,” Kaul wrote to IT Business Edge in response to emailed questions.

This suggests deep changes, which makes choosing vendors an even more vital decision than in better-understood, limited technology deployments. The stakes are high. It is a nascent field where some companies no doubt are selling vaporware and some perhaps haven’t figured out their own value proposition. It’s best to be very careful. “If your AI vendor won’t promise you real ROI, it’s because they can’t deliver,” wrote Ben Lamm, the co-founder and CEO of Hypergiant. “If a vendor is trying to skirt around a clear statement of value, you know they won’t serve you well in the long run.”

Organizations should do the same blocking and tackling that is done for any other significant investment. Credentials should be checked, deep conversations conducted and a high comfort level achieved. “One of the most important things enterprises can look for in an AI vendor is understanding the success of their customer base,” wrote Paresh Kharya, the director of Accelerated Computing for NVIDIA. “Don’t be afraid to ask which of their customers are successful and how has their new AI solution benefited their business. Asking this question will help you gauge the tangible business value the vendor is promoting.”

Organizations can take steps to increase the odds that they will choose the right vendor. Dave Damer, the founder and CEO of Testfire Labs, offers three tips. The first two focus on precisely what the vendor will be providing. Companies should ask if the prospective vendor delivers packaged solutions, custom solutions or both, and if it has the necessary expertise in house or must outsource. Finally, the organization must understand what will happen after the deployment is done. “A lack of employee training or further customization of models can lead to unusable and/or ineffective technology,” Damer wrote.

Best of Breed or Single Vendor?

A longstanding debate in telecom and IT circles is whether platforms are better coming from a single vendor or from “best of breed” arrangements in which the top elements are cherry-picked and strung together. Single-vendor platforms presumably are better integrated and have deeper, easier-to-use management functions, while the best-of-breed approach potentially offers better performance.

The pendulum is swinging toward multiple vendors, at least according to Tracy Malingo, the senior vice president of Product Strategy at Verint, which bought AI firm Next IT last December. “This is actually one of the biggest shifts that we’ve seen in AI,” Malingo wrote. “As major players have sought to lock in ecosystems and as companies have evolved in their understanding and needs for AI, we’ve seen the market begin to shift toward best of breed over single-source vendors. That trend will continue in the future.”

The bottom line is that AI can cut both ways: toward more efficient operations and a healthier bottom line, or toward confusion, failed implementations and all the headaches those results bring. “Organizations should have a clear understanding of what business issues they’re trying to solve with AI,” wrote Guy Yehiav, the CEO of Profitect. “How will the technology they’re evaluating make an impact to both top and bottom line and what is the approach to roll it out across the business? If analytics and AI are done well, the impact should be quick and results tangible.”

Read the source article at IT Business Edge.

Facebook Poaches Head of Chip Development From Google

Facebook Inc. has sent another signal that it’s serious about building its own semiconductors, joining Apple Inc., Alphabet Inc.’s Google, and Amazon.com Inc. in trying to make its own custom chips.

The social-networking giant this month hired Shahriar Rabii to be a vice president and its head of silicon. Rabii previously worked at Google, where he helped lead the team in charge of building chips for the company’s devices, including the Pixel smartphone’s custom Visual Core chip, according to his LinkedIn profile. He’ll work under Andrew Bosworth, the company’s head of virtual reality and augmented reality, according to people familiar with the matter.

Spokesmen for Facebook and Google declined to comment on Rabii’s move.

Facebook started forming a team to design chips earlier this year, Bloomberg News reported in April. The Menlo Park, California-based company is working on semiconductors, which can be useful for a variety of efforts, including processing information for its vast data centers and powering its artificial intelligence work.

Google has been developing more chips for its future devices. Later this year, the Mountain View, California-based search giant plans to release new Pixel phones with upgraded cameras and an edge-to-edge screen on the new larger model, Bloomberg News reported in May.

Facebook and Google’s moves are part of a trend in which technology companies are seeking to supply themselves with semiconductors and lower their dependence on chipmakers such as Intel Corp. and Qualcomm Inc. Apple has been shipping its own custom main processors in iPads and iPhones since 2010, and has created an array of custom chips for controlling Bluetooth, taking pictures, and conducting machine learning tasks. By 2020, the iPhone maker hopes to start shipping Macs with its own main processors.

Facebook, through its Oculus virtual reality division and its Building 8 hardware group, is working on several future devices. Earlier this year, the company launched the Oculus Go standalone virtual reality headset with a Qualcomm smartphone chip. Facebook is also working on its first branded hardware: a series of smart speakers with large touch screens that can also be used for video chats.

Future generations of those devices could be improved by custom processors. With its own chips, Facebook also would gain finer control over product development and could better tie together its software and hardware.

Custom chips may also improve the company’s efforts in artificial intelligence. Facebook has been working to use AI to better understand the nature of content people post on social media, so that it can quickly take down hate speech, fake accounts and live videos of violence. But so far, even human moderators are having trouble judging content consistently.

Read the source post at Bloomberg.

Here is the Essential Landscape for Enterprise AI Companies

Enterprise companies comprise a $3.4 trillion market worldwide, of which an increasingly large share is being allocated to artificial intelligence technologies.

By our definition, “enterprise” technology companies create tools for workplace roles and functions that a large number of businesses use. For example, Salesforce is the primary enterprise software used by sales professionals in a company. Also known as a type of customer relationship management (CRM) software, it is the system of record where sales professionals enter their contacts, track the progress of leads, and monitor sales metrics. Any company directly selling its products and services would benefit from a CRM.

Plenty of enterprise companies use combinations of automated data science, machine learning, and modern deep learning approaches for tasks like data preparation, predictive analytics, and process automation. Many are well-established players with deep domain expertise and product functionality. Others are hot new startups applying artificial intelligence to new problems. We cover a mix of both.

To help you identify the best tools for your business, we’ve broken up our landscape of enterprise AI solutions into functional categories to match organizational workflows and use cases. Most of these enterprise companies can be classified in multiple categories, but we focused on the primary value add and differentiation for each company.

BUSINESS INTELLIGENCE (BI)

This function derives intelligence from company data, encompassing the business applications, tools, and workflows that bring together information from all parts of the company to enable smart analysis. From streamlining data preparation (Paxata, Trifacta) to connecting data across silos (Tamr, Alation) to automating reports and generating narratives (Narrative Science, Yseop), enterprise companies are improving BI workflows with artificial intelligence.

PRODUCTIVITY

Productivity at work is often stunted by a myriad of tiny tasks that consume your attention, i.e. “death by a thousand cuts.” Many productivity tools have emerged to eliminate such tasks, such as the endless back-and-forth required to schedule meetings; virtual scheduling assistants like X.ai, FreeBusy, and Clara Labs tackle that one.

CUSTOMER MANAGEMENT

Taking care of your customers is no easy task. Enterprise companies have recognized this critical area as ripe for disruption with artificial intelligence. DigitalGenius utilizes AI to sift through your customer service data and automate customer service operations. Inbenta’s AI-powered natural language search enables delivery of self-service support in forums and virtual agents. Luminoso creates visual representations of customer feedback, allowing companies to better understand what consumers want.

HR & TALENT

With the average tenure of a hire getting shorter, hiring and talent management is arguably one of the most difficult areas for every company to tackle. Where can you find the right candidates and how do you keep hires engaged? Companies like Entelo and Scout work from the top of the funnel to get you the most qualified candidates while others like hiQ Labs utilize public data to warn you of staff attrition risks and enable you to create retention strategies.

B2B SALES & MARKETING

No one likes to waste time on tedious data entry or spend hours googling and sifting through LinkedIn for that marginal bit of information on a lead. Perhaps that’s why professionals in these functions are willing to embrace and experiment with new tools. Some automate data entry and improve forecasting accuracy, like Fusemachines and the AI-powered sales assistant Tact, while others, like Lattice Engines and Mintigo, utilize thousands of data sources to surface the most qualified prospects and opportunities. There is also Salesforce’s Einstein, which aims to bring AI and automation to the entire sales ecosystem.

CONSUMER MARKETING

So much data and intelligence can be gathered about your consumers through social channels, distribution channels, media channels, etc. Smart tools can not only crawl through this data, but analyze and understand what’s being said or done. Lexalytics is a text analytics platform that translates billions of unstructured data pieces and online signals into actionable insights for the company. Affinio uses deep learning to surface social fingerprints for the brand by creating interest-based clusters. Brands now have a better understanding of their customer segments, behaviors, and sentiments.

FINANCE & OPERATIONS

Finance & operations includes the back office, forecasting, accounting, and operational roles required to run a company. Since nobody likes paperwork, this area is ripe for automation. HyperScience recently came out of stealth with its $18 million Series A in December 2016 to completely automate back-office operations like form processing and data extraction through AI. Another company, AppZen, is an automated audit platform that can instantly detect fraud and compliance issues, freeing up T&E teams from tedious manual audits and checks. The tools in this space reap immediate returns for companies due to the volume and repetitive nature of some of the tasks.

Read the source article at TopBots.

Alibaba and SenseTime Team to Make Hong Kong a Global AI Hub

Alibaba is teaming up with SenseTime, the world’s highest-valued AI startup, to launch a not-for-profit artificial intelligence lab in Hong Kong in a bid to make the city a global hub for artificial intelligence.

Alibaba, which is SenseTime’s largest single investor thanks to a recent $600 million round at a valuation of $4.5 billion, is providing financing for the “HKAI Lab” through its Hong Kong entrepreneurship fund. SenseTime said it will contribute too, although the total amount of capital backing the initiative hasn’t been revealed.

The partners of the project — which also includes the Hong Kong Science and Technology Parks Corporation (HKSTP) — said the aim is to “advance the frontiers of AI,” which includes helping startups commercialize their technology, develop ideas and promote knowledge sharing in the AI field.

That’s all fairly general — Alibaba has a track record of politicking through technology investment schemes in Greater China and Southeast Asia — but one tangible project is a six-month accelerator program planned for September which will welcome AI startups to the HKAI Lab. Alibaba’s Cloud business and HKSTP are among the backers that will help the program offer early-stage funding to successful applicants, while Alibaba and SenseTime will help with mentoring and development during the program.

“Alibaba sees AI as a fundamental technology that will make a difference to society,” Alibaba executive vice chairman Joe Tsai said in a statement. “We envision the Hong Kong AI Lab to be an open platform where researchers, startups and industry participants can collaborate and build a culture of innovation.”

China and the U.S. are the two biggest players in the global AI battle; this project alone won’t divert that, but it could stir up potential in Hong Kong.

Alibaba maintains tight relationships in Hong Kong, particularly through the fund, which is around $130 million in size. While the program is ostensibly aimed at promoting startups in Hong Kong, particularly around AI, it is also sure to galvanize Alibaba’s ties to Hong Kong’s establishment and tech community. Hong Kong is growing as a destination for startups, as a number of the city’s key players discussed at a TechCrunch China event last year, but talent remains a key issue, and this initiative could benefit Hong Kong in that respect.

Read the source article at TechCrunch.

Beware of AI’s Dark Side, Warns Google Cofounder Sergey Brin

Artificial Intelligence is a recurring theme in recent remarks by top executives at Alphabet. The company’s latest Founders’ Letter, penned by Sergey Brin, is no exception—but he also finds time to namecheck possible downsides around safety, jobs, and fairness.

The company has issued a Founders’ Letter—usually penned by Brin, cofounder Larry Page or both—every year, beginning with the letter that accompanied Google’s 2004 IPO. Machine learning and artificial intelligence have been mentioned before. But this year Brin expounds at length on a recent boom in development in AI that he describes as a “renaissance.”

“The new spring in artificial intelligence is the most significant development in computing in my lifetime,” Brin writes—no small statement from a man whose company has already wrought great changes in how people and businesses use computers.

When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.

Brin nods to the gains in computing power that have made this possible. He says the custom AI chip running inside some Google servers is more than a million times more powerful than the Pentium II chips in Google’s first servers. In a flash of math humor, he says that Google’s quantum computing chips might one day offer jumps in speed over existing computers that can only be described with the number that gave Google its name, a googol, or a 1 followed by 100 zeroes.

As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes.

AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says—a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.

All that might sound like a lot for Google and the tech industry to contemplate while also working at full speed to squeeze profits from new AI technology. Even some Google employees aren’t sure the company is on the right track—thousands signed a letter protesting the company’s contract with the Pentagon to apply machine learning to video from drones.

Brin doesn’t mention that challenge, and wraps up his discussion of AI’s downsides on a soothing note. His letter points to the company’s membership in the industry group Partnership on AI, and to Alphabet’s research in areas such as how to make learning software that doesn’t cheat, and AI software whose decisions are more easily understood by humans. “I expect machine learning technology to continue to evolve rapidly and for Alphabet to continue to be a leader — in both the technological and ethical evolution of the field,” Brin writes.

Read the source article in Wired.

Even at a Nonprofit, AI Researchers are Making More Than $1 Million

One of the poorest-kept secrets in Silicon Valley has been the huge salaries and bonuses that experts in artificial intelligence can command. Now, a little-noticed tax filing by a research lab called OpenAI has made some of those eye-popping figures public.

OpenAI paid its top researcher, Ilya Sutskever, more than $1.9 million in 2016. It paid another leading researcher, Ian Goodfellow, more than $800,000 — even though he was not hired until March of that year. Both were recruited from Google.

A third big name in the field, the roboticist Pieter Abbeel, made $425,000, though he did not join until June 2016, after taking a leave from his job as a professor at the University of California, Berkeley. Those figures all include signing bonuses.

The figures listed on the tax forms, which OpenAI is required to release publicly because it is a nonprofit, provide new insight into what organizations around the world are paying for A.I. talent. But there is a caveat: The compensation at OpenAI may be underselling what these researchers can make, since as a nonprofit it can’t offer stock options.

Salaries for top A.I. researchers have skyrocketed because there are not many people who understand the technology and thousands of companies want to work with it. Element AI, an independent lab in Canada, estimates that 22,000 people worldwide have the skills needed to do serious A.I. research — about double the number from a year ago.

“There is a mountain of demand and a trickle of supply,” said Chris Nicholson, the chief executive and founder of Skymind, a start-up working on A.I.

That raises significant issues for universities and governments. They also need A.I. expertise, both to teach the next generation of researchers and to put these technologies into practice in everything from the military to drug discovery. But they could never match the salaries being paid in the private sector.

In 2015, Elon Musk, the chief executive of the electric-car maker Tesla, and other well-known figures in the tech industry created OpenAI and moved it into offices just north of Silicon Valley in San Francisco. They recruited several researchers with experience at Google and Facebook, two of the companies leading an industrywide push into artificial intelligence.

In addition to salaries and signing bonuses, the internet giants typically compensate employees with sizable stock options — something that OpenAI does not do. But it has a recruiting message that appeals to idealists: It will share much of its work with the outside world, and it will consciously avoid creating technology that could be a danger to people.

“I turned down offers for multiple times the dollar amount I accepted at OpenAI,” Mr. Sutskever said. “Others did the same.” He said he expected salaries at OpenAI to increase as the organization pursued its “mission of ensuring powerful A.I. benefits all of humanity.”

OpenAI spent about $11 million in its first year, with more than $7 million going to salaries and other employee benefits. It employed 52 people in 2016.

People who work at major tech companies or have entertained job offers from them have told The New York Times that A.I. specialists with little or no industry experience can make between $300,000 and $500,000 a year in salary and stock. Top names can receive compensation packages that extend into the millions.

“The amount of money was borderline crazy,” Wojciech Zaremba, a researcher who joined OpenAI after internships at Google and Facebook, told Wired. While he would not reveal exact numbers, Mr. Zaremba said big tech companies were offering him two or three times what he believed his real market value was.

At DeepMind, a London A.I. lab now owned by Google, costs for 400 employees totaled $138 million in 2016, according to the company’s annual financial filings in Britain. That translates to $345,000 per employee, including researchers and other staff.

Researchers like Mr. Sutskever specialize in what are called neural networks, complex algorithms that learn tasks by analyzing vast amounts of data. They are used in everything from digital assistants in smartphones to self-driving cars.
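“Learning a task by analyzing data” can be made concrete with the smallest possible case: a single weight adjusted by gradient descent until it reproduces a pattern hidden in example data. This is purely a toy illustration of the principle, not a production network; the data and learning rate below are made up for the sketch.

```python
import numpy as np

# Example data encoding a hidden rule (y = 2x) that the model must discover.
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs

w = 0.0  # a one-weight "network" that initially knows nothing
for _ in range(200):
    error = w * xs - ys
    grad = np.mean(2.0 * error * xs)  # gradient of mean squared error w.r.t. w
    w -= 0.05 * grad                  # nudge the weight to reduce the error

# After training, w has converged to roughly 2.0: the rule was learned
# purely from the examples, never stated explicitly.
```

Real neural networks repeat this same loop over millions of weights arranged in layers, which is why they need the “vast amounts of data” the article describes.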

Some researchers may command higher pay because their names carry weight across the A.I. community and they can help recruit other researchers.

Mr. Sutskever was part of a three-researcher team at the University of Toronto that created key computer vision technology. Mr. Goodfellow invented a technique that allows machines to create fake digital photos that are nearly indistinguishable from the real thing.

“When you hire a star, you are not just hiring a star,” Mr. Nicholson of the start-up Skymind said. “You are hiring everyone they attract. And you are paying for all the publicity they will attract.”

Other researchers at OpenAI, including Greg Brockman, who leads the lab alongside Mr. Sutskever, did not receive such high salaries during the lab’s first year.

In 2016, according to the tax forms, Mr. Brockman, who had served as chief technology officer at the financial technology start-up Stripe, made $175,000. As one of the founders of the organization, however, he most likely took a salary below market value. Two other researchers with more experience in the field — though still very young — made between $275,000 and $300,000 in salary alone in 2016, according to the forms.

Though the pool of available A.I. researchers is growing, it is not growing fast enough. “If anything, demand for that talent is growing faster than the supply of new researchers, because A.I. is moving from early adopters to wider use,” Mr. Nicholson said.

That means it can be hard for companies to hold on to their talent. Last year, after only 11 months at OpenAI, Mr. Goodfellow returned to Google. Mr. Abbeel and two other researchers left the lab to create a robotics start-up, Embodied Intelligence. (Mr. Abbeel has since signed back on as a part-time adviser to OpenAI.) And another researcher, Andrej Karpathy, left to become the head of A.I. at Tesla, which is also building autonomous driving technology.

In essence, Mr. Musk was poaching his own talent. Since then, he has stepped down from the OpenAI board, with the lab saying this would allow him to “eliminate a potential future conflict.”

Read the source article at The New York Times (via CNBC).

Tale of Two Amazons: Median Pay is $28.5K; Software Engineers Earn North of $100K

Amazon disclosed in a filing Wednesday (April 19) that the median pay for its employees was $28,446 in 2017. Put another way: half of Amazon’s employees earned less than that amount. In the same year, Amazon created more than 130,000 jobs, including positions for AI scientists.

The underwhelming figure was made public as part of a new rule put into effect this year by the Securities and Exchange Commission requiring companies to disclose the pay ratio between their CEOs and overall employees.

Jeff Bezos, Amazon’s CEO and the world’s richest man, received a total compensation of about $1.68 million last year — or 59 times the median Amazon employee compensation.

Call it a tale of two Amazons: those who work in technical roles and those who work in warehouses and grocery stores.

Amazon said in a statement provided to CNN that the median pay includes “global, full and part-time” employees across “every area of the company.”

“In every country and every sector where we employ people, we offer highly competitive wages and benefits such as company stock, health insurance and retirement savings, innovative parental leave, and training for in-demand jobs through our Career Choice program,” the company said.

Amazon now has more than half a million employees worldwide, thanks in large part to a heavy investment in fulfillment centers and its $13.7 billion acquisition of Whole Foods, which had about 87,000 employees when the deal was announced.

Bezos said in a letter to shareholders Wednesday that Amazon created more than 130,000 jobs last year alone, not counting acquisitions. Those new jobs “cover a wide range of professions, from artificial intelligence scientists to packaging specialists to fulfillment center associates,” he wrote.

But the artificial intelligence scientists are bound to make substantially more than the fulfillment center workers.

The average salary for software engineers at Amazon is north of $100,000, according to data from PayScale, a salary comparison service.

By comparison, a full-time warehouse associate at one of Amazon’s fulfillment centers in New Jersey could make as much as $13.85 per hour, according to a current job posting. That would come out to about the same as last year’s median pay.
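A rough annualization bears that out. The sketch below assumes a full-time schedule of 40 hours per week over 52 weeks, which the posting itself doesn't state:

```python
hourly_wage = 13.85  # NJ warehouse associate, per the job posting
median_pay = 28_446  # Amazon's reported 2017 median

# Assume full-time: 40 hours/week for 52 weeks/year
annual_wage = hourly_wage * 40 * 52
print(f"${annual_wage:,.0f}")  # $28,808, within about 1% of the median
```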

Read the source article at CNNtech.com.

Here Are 14 Amazing Facts About Alibaba’s Co-Founder Jack Ma

Jack Ma is one of the richest people in China, and his way to the top has been a long and tough journey.

A business magnate and philanthropist, Jack Ma is the cofounder of Alibaba, a conglomerate that’s focused on technology, artificial intelligence, retail, e-commerce, and the internet. If that leaves a lot to the imagination, think something along the lines of a combination of eBay and Amazon.

Starting a company and turning it into one as big as Alibaba naturally sparks the interest of many. A closer look at Ma’s history, and a few tidbits about him, helps explain how it all came to be.

As of this writing, he’s behind only Tencent Holdings’ CEO and chairman, Ma Huateng, making him the second-richest person in China, according to Forbes. On the international stage, he’s in the 20th spot.

Beyond his home country of China, Ma may not yet be a household name like Steve Jobs, Bill Gates, Warren Buffett, Mark Zuckerberg, or Jeff Bezos. However, that has been changing as his accomplishments have made their way onto the worldwide scene.

For starters, Alibaba shares opened at $92.70 apiece in what was the biggest initial public offering, or IPO, in the history of the United States.

Now to whet the appetite: his real name is Ma Yun. That’s just one of the many amazing facts about Ma, and there’s a lot more to find out.

He Started Out As An English Teacher, Earning $12 To $15 Per Month

After he graduated from Hangzhou Teachers University – now known as Hangzhou Normal University – with a bachelor’s degree in English in 1988, Ma was the only one chosen out of 500 students to be a university teacher.

It was a stroke of good luck, and it was probably an honor. During his stint as an English teacher, he earned between 100 and 120 renminbi a month – at the time, roughly $12 to $15.

He spent a total of five years teaching before he moved on to other things, including but not limited to starting his own businesses.

He Learned English By Giving Visitors Tours Free Of Charge For 8 Years

When he was 12 years old, Ma had a strong desire to learn the English language.

To do that, he would give foreigners tours for free, riding his bicycle during the early hours of each morning to a hotel in Hangzhou that’s at least 40 minutes away.

He would then improve his English by conversing with the visitors as they went through the tours. Not only that, but he also learned “Western people’s system, ways, methods and techniques.” In the process, he developed a globalized view, which was in conflict with what his teachers and studies taught him.

This went on for eight years.

He Flunked His University Application To Hangzhou Teachers’ University Not Once, But Twice

With billionaires such as Gates and Zuckerberg dropping out of Harvard University, it’s easy to mistake Ma for following in their footsteps.

The thing is, his story didn’t go like that at all. He flunked his university admission exam at Hangzhou Teachers University twice. In an interview with Inc., he even said that the university could be considered the worst in the city.

What’s more, he wasn’t really a good student to begin with.

“I failed a key primary school test two times, I failed the middle school test three times, I failed the college entrance exam two times and when I graduated, I was rejected for most jobs I applied for out of college,” he said.

Needless to say, Ma didn’t let his failures stop him or slow him down on his way to success.

He Was Rejected By Harvard 10 Times

During the World Economic Forum 2015, Ma revealed that he was rejected by Harvard University.

For most people, one rejection is enough to stop them, but that wasn’t the case for Ma. He applied over and over again to the university until he was rejected a total of 10 times.

“I applied for Harvard ten times, got rejected ten times and I told myself that ‘Someday I should go teach there,’” he said.

In 2002, he gave a speech at Harvard, where the CEO of a foreign company called him a “madman” for the way he managed Alibaba. That executive changed his mind after Ma invited him for a three-day stay at the business.

Ma earned his Master of Business Administration degree or MBA from Cheung Kong Graduate School of Business.

He Was Rejected For A Job Application At KFC

Right after leaving behind his five-year career as an English teacher in 1995, Ma started his search for other opportunities.

One of the 30 jobs he set his eyes on was at a local KFC branch in Hangzhou, but he was rejected by the fast-food restaurant. To add insult to injury, he was the only one of the 24 applicants who wasn’t hired.

After that, he went on to pursue his own business, which was a small-time translation and interpretation company.

Read the source article in TechTimes.

5 Truths About Artificial Intelligence Everyone Should Know

By Rana el Kaliouby, Co-founder and CEO, Affectiva
Last week, I was in LA for the premiere of a new AI documentary, “Do You Trust This Computer?” (See video link below.) It was a full house with a few hundred audience members. I was one of the AI scientists featured in the documentary, along with bigwigs like Elon Musk, Stuart Russell, Andrew Ng and writers Jonathan Nolan and John Markoff. Elon Musk kicked off the evening with director Chris Paine, emphasizing that AI is an important topic that could very well determine the future of humanity. The excitement in the air was palpable. I was one of seven “AI experts” invited on stage after the screening for a Q&A session with the audience. Shivon Zilis, Project Director at OpenAI, and I were the only women.

The documentary did an excellent job surveying the research and applications of AI, from automation and robots to medicine, automated weapons, social media and data, as well as the future of the relationship between humans and machines. The work my team and I are doing provided a clear example of the good that can come out of AI.

As I watched from my seat, I could hear the audience gasp at times, and I couldn’t help but notice a couple of things: for one, there was a foregone conclusion that AI is out to get us, and two, this field is still incredibly dominated by men – white men specifically. Other than myself, only two other women were featured, compared to about a dozen men. But it wasn’t just the numbers – it was the total air time. The majority of the time, the voice on screen was male. I vowed that on stage that night, I would make my voice heard.

Here are some of my key thoughts coming out of the premiere and dialogue around it:
1. AI is in dire need of diversity.

The first question asked from the audience was, “Do you see an alternative narrative here – one that is more optimistic?” YES, I chimed in, quoting Yann LeCun, head of AI research at Facebook and a professor at NYU: “Intelligence is not correlated with the desire to dominate. Testosterone is!” I added that we need diversity in technology – gender diversity, ethnic diversity, and diversity of backgrounds and experiences. Perhaps if we had that, the rhetoric around AI would be more about compassion and collaboration, and less about taking over the world. The audience applauded.

2. Technology is neutral–we, as a society, decide whether we use it for good or bad.

That has been true throughout history. AI has so much potential for good. As thought leaders in the AI space, we need to advocate for beneficial use cases and educate the world about the potential for abuse, so that the public is involved in a transparent discussion. In a sense, that’s what is so powerful about this documentary: it will not only educate the public but spark a conversation that is so desperately needed.

My company, Affectiva, joined leading technology companies in the Partnership on AI – a consortium that includes Amazon, Google, Apple, and many more – which is working to set a standard for ethical uses of AI. Yes, regulation and legislation are important, but they too often lag, so it’s up to leaders in the industry to spearhead these discussions and act accordingly. To that end, ethics also needs to become a mandatory component of AI education.

3. We need to ensure that AI is equitable, accountable, transparent and inclusive.

The real problem is not the existential threat of AI; it is the development of ethical AI systems. Unfortunately, today many are accidentally building bias into AI systems, perpetuating the racial, gender, and ethnic biases that already exist in society. In addition, it is not clear who is accountable for AI’s behavior as it is applied across industries. Take the recent tragic accident in which a self-driving Uber vehicle killed a pedestrian. It so happens that in that case, there was a safety driver in the car. But who is responsible: the vehicle? The driver? The company? These are incredibly difficult questions, but we need to set standards of accountability for AI to ensure proper use.

4. It’s a partnership, not a war.

I don’t agree with the view that it’s humans vs. machines. With so much potential for AI to be harnessed for good (assuming we take the necessary steps outlined above), we need to shift the dialogue to see the relationship as a partnership between humans and machines. There are several areas where this is the case:

  • Medicine. For example, take mental health conditions such as autism or depression. It is estimated that the United States alone needs 15,000 more mental health professionals, and that number doesn’t even factor in countries around the world where the need is even greater. Virtual therapists and social robots can augment human clinicians, using AI to build rapport with patients at home, act preemptively, and get patients just-in-time help. AI alone is not enough, and will not take doctors’ place. But there’s potential for the technology, together with human professionals, to expand what’s possible in healthcare today.
  • Autonomous vehicles. As we develop these systems, they will sometimes fail, even as they keep getting better. The role of the human co-pilot or safety driver is critical. For example, many vehicles already have driver-facing cameras that monitor whether a human driver is paying attention or distracted. This is key in ensuring that, when a semi-autonomous vehicle must pass control back to a human driver, the person is actually ready and able to take over safely. This collaboration between AI and humans will be critical to ensuring safety as autonomous vehicles take to the streets around us.
5. AI needs emotional intelligence.

AI today has a high IQ but a low EQ, or emotional intelligence. But I do believe that the merger of EQ and IQ in technology is inevitable, as so many of our decisions, both personal and professional, are driven by emotions and relationships. That’s why we’re seeing a rise in relational and conversational technologies like Amazon Alexa and chatbots. Still, they’re lacking emotion. It’s inevitable that we will continue to spend more and more time with technology and devices, and while many (rightly) believe that this is degrading our humanity and ability to connect with one another, I see an opportunity. With Emotion AI, we can inject humanity back into our connections, enabling not only our devices to better understand us, but fostering a stronger connection between us as individuals.

While I am an optimist, I am not naive.

Following the panel, I received an incredible amount of positive feedback. The audience appreciated the optimistic point of view. But that doesn’t mean I am naive or disillusioned. I am part of the World Economic Forum Global Council on Robotics and AI, and we spend a fair amount of our time together as a group discussing ethics, best practices, and the like. I realize that not everyone is taking ethics into consideration. That is definitely a concern. I do worry that organizations and even governments that own AI and data will have a competitive advantage and power, and those who don’t will be left behind.

The good news is: we, as a society, are designing those systems. We get to define the rules of the game.

AI is not an existential threat. It’s potentially an existential benefit–if we make it that way. At the screening, there were so many young people in the audience watching. I am hopeful that the documentary renews our commitment to AI ethics and inspires us to apply AI for good.

Link to video, Do You Trust This Computer?

Learn more about Affectiva.