A Look Inside Facebook’s AI Machine

By Steven Levy, Wired

When asked to head Facebook’s Applied Machine Learning group — to supercharge the world’s biggest social network with an AI makeover — Joaquin Quiñonero Candela hesitated. It was not that the Spanish-born scientist, a self-described “machine learning (ML) person,” hadn’t already witnessed how AI could help Facebook. Since joining the company in 2012, he had overseen a transformation of the company’s ad operation, using an ML approach to make sponsored posts more relevant and effective. Significantly, he did this in a way that empowered engineers in his group to use AI even if they weren’t trained to do so, making the ad division richer overall in machine learning skills. But he wasn’t sure the same magic would take hold in the larger arena of Facebook, where billions of people-to-people connections depend on fuzzier values than the hard data that measures ads. “I wanted to be convinced that there was going to be value in it,” he says of the promotion.

Despite his doubts, Candela took the post. And now, after barely two years, his hesitation seems almost absurd.

How absurd? Last month, Candela addressed an audience of engineers at a New York City conference. “I’m going to make a strong statement,” he warned them. “Facebook today cannot exist without AI. Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI.”

Last November I went to Facebook’s mammoth headquarters in Menlo Park to interview Candela and some of his team, so that I could see how AI suddenly became Facebook’s oxygen. To date, much of the attention around Facebook’s presence in the field has been focused on its world-class Facebook Artificial Intelligence Research group (FAIR), led by renowned neural net expert Yann LeCun. FAIR, along with competitors at Google, Microsoft, Baidu, Amazon, and Apple (now that the secretive company is allowing its scientists to publish), is one of the preferred destinations for coveted grads of elite AI programs. It’s one of the top producers of breakthroughs in the brain-inspired digital neural networks behind recent improvements in the way computers see, hear, and even converse. But Candela’s Applied Machine Learning group (AML) is charged with integrating the research of FAIR and other outposts into Facebook’s actual products—and, perhaps more importantly, empowering all of the company’s engineers to integrate machine learning into their work.

Because Facebook can’t exist without AI, it needs all its engineers to build with it.

My visit occurs two days after the presidential election and one day after CEO Mark Zuckerberg blithely remarked that “it’s crazy” to think that Facebook’s circulation of fake news helped elect Donald Trump. The comment would turn out to be the equivalent of driving a fuel tanker into a growing fire of outrage over Facebook’s alleged complicity in the orgy of misinformation that plagued its News Feed in the last year. Though much of the controversy is beyond Candela’s pay grade, he knows that ultimately Facebook’s response to the fake news crisis will rely on machine learning efforts in which his own team will have a part.

But to the relief of the PR person sitting in on our interview, Candela wants to show me something else—a demo that embodies the work of his group. To my surprise, it’s something that performs a relatively frivolous trick: It redraws a photo or streams a video in the style of an art masterpiece by a distinctive painter. In fact, it’s reminiscent of the kind of digital stunt you’d see on Snapchat, and the idea of transmogrifying photos into Picasso’s cubism has already been accomplished.

“The technology behind this is called neural style transfer,” he explains. “It’s a big neural net that gets trained to repaint an original photograph using a particular style.” He pulls out his phone and snaps a photo. A tap and a swipe later, it turns into a recognizable offshoot of Van Gogh’s “The Starry Night.” More impressively, it can render a video in a given style as it streams. But what’s really different, he says, is something I can’t see: Facebook has built its neural net so it will work on the phone itself.
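To make Candela’s one-sentence description concrete, the sketch below shows the classic optimization-based formulation of neural style transfer in PyTorch: deep features from a pretrained VGG network stand in for a photo’s content, Gram matrices of shallower features stand in for a painting’s style, and a copy of the photo is nudged until it matches both. The layer indices, loss weight, and file names are illustrative assumptions. The production system Candela demos differs in one important respect: it runs a trained network directly on the phone, something this slow, iterative sketch does not attempt to reproduce.

```python
# Illustrative sketch of optimization-based neural style transfer,
# not Facebook's on-device system. Assumed files: photo.jpg, starry_night.jpg.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    # ImageNet-style normalization is omitted to keep the sketch short.
    tfm = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tfm(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

# Frozen, pretrained VGG-19 feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval().to(device)
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}  # shallow-to-deep conv layers that define "style"
CONTENT_LAYER = 21                 # deeper layer that preserves the scene's content

def features(x):
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(feat):
    # Channel-by-channel correlation matrix: the "texture statistics" of a layer.
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = load_image("photo.jpg")       # the snapshot to be repainted
style_img = load_image("starry_night.jpg")  # the painting whose style is borrowed

style_targets = [gram(f).detach() for f in features(style_img)[0]]
content_target = features(content_img)[1].detach()

# Start from the photo and nudge its pixels until both losses are small.
result = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)

for step in range(300):
    opt.zero_grad()
    style_feats, content_feat = features(result)
    style_loss = sum(F.mse_loss(gram(f), t) for f, t in zip(style_feats, style_targets))
    content_loss = F.mse_loss(content_feat, content_target)
    (1e6 * style_loss + content_loss).backward()
    opt.step()
```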

Read the source article in Wired.

Facebook Poaches Head of Chip Development From Google

Facebook Inc. has sent another signal that it’s serious about building its own semiconductors, joining Apple Inc., Alphabet Inc.’s Google, and Amazon.com Inc. in trying to make its own custom chips.

The social-networking giant this month hired Shahriar Rabii to be a vice president and its head of silicon. Rabii previously worked at Google, where he helped lead the team in charge of building chips for the company’s devices, including the Pixel smartphone’s custom Visual Core chip, according to his LinkedIn profile. He’ll work under Andrew Bosworth, the company’s head of virtual reality and augmented reality, according to people familiar with the matter.

Spokesmen for Facebook and Google declined to comment on Rabii’s move.

Facebook started forming a team to design chips earlier this year, Bloomberg News reported in April. The Menlo Park, California-based company is working on semiconductors, which can be useful for a variety of efforts, including processing information for its vast data centers and its artificial intelligence work.

Google has been developing more chips for its future devices. Later this year, the Mountain View, California-based search giant plans to release new Pixel phones with upgraded cameras and an edge-to-edge screen on the new larger model, Bloomberg News reported in May.

Facebook and Google’s moves are part of a trend in which technology companies are seeking to supply themselves with semiconductors and lower their dependence on chipmakers such as Intel Corp. and Qualcomm Inc. Apple has been shipping its own custom main processors in iPads and iPhones since 2010, and has created an array of custom chips for controlling Bluetooth, taking pictures, and conducting machine learning tasks. By 2020, the iPhone maker hopes to start shipping Macs with its own main processors.

Facebook, through its Oculus virtual reality division and Building 8 hardware division, is working on several future devices. Earlier this year, the company launched the Oculus Go standalone virtual reality headset with a Qualcomm smartphone chip. Facebook is also working on its first branded hardware: a series of smart speakers with large touch screens that can also be used for video chats.

Future generations of those devices could be improved by custom processors. With its own chips, Facebook also would gain finer control over product development and could better tie together its software and hardware.

Custom chips may also improve the company’s efforts in artificial intelligence. Facebook has been working to use AI to better understand the nature of content people post on social media, so that it can quickly take down hate speech, fake accounts and live videos of violence. But so far, even human moderators are having trouble judging content consistently.

Read the source post at Bloomberg.

Social Media is Causing Trypophobia

Something is rotten in the state of technology.

Amid all the hand-wringing over fake news, the cries of election-deforming Kremlin disinformation plots, the calls from political podia for tech giants to locate a social conscience, a knottier realization is taking shape.

Fake news and disinformation are just a few of the symptoms of what’s wrong and what’s rotten. The problem with platform giants is something far more fundamental.

The problem is that these vastly powerful algorithmic engines are black boxes. And, at the business end of the operation, each individual user only sees what each individual user sees.

The great lie of social media has been to claim it shows us the world. And the follow-on deception: that these technology products bring us closer together.

In truth, social media is not a telescopic lens — as the telephone actually was — but an opinion-fracturing prism that shatters social cohesion by replacing a shared public sphere and its dynamically overlapping discourse with a wall of increasingly concentrated filter bubbles.

Social media is not connective tissue but engineered segmentation that treats each pair of human eyeballs as a discrete unit to be plucked out and separated off from its fellows.

Think about it: it’s a trypophobic’s nightmare.

Or the panopticon in reverse — each user bricked into an individual cell that’s surveilled from the platform controller’s tinted glass tower.

Little wonder lies spread and inflate so quickly via products that are not only hyper-accelerating the rate at which information can travel but deliberately pickling people inside a stew of their own prejudices.

First it panders, then it polarizes, then it pushes us apart.

We aren’t so much seeing through a lens darkly when we log onto Facebook or peer at personalized search results on Google, we’re being individually strapped into a custom-moulded headset that’s continuously screening a bespoke movie — in the dark, in a single-seater theatre, without any windows or doors.

Are you feeling claustrophobic yet?

It’s a movie that the algorithmic engine believes you’ll like. Because it’s figured out your favorite actors. It knows what genre you skew to. The nightmares that keep you up at night. The first thing you think about in the morning.

It knows your politics, who your friends are, where you go. It watches you ceaselessly and packages this intelligence into a bespoke, tailor-made, ever-iterating, emotion-tugging product just for you.

Read the source article at TechCrunch.

An ecosystem perspective on Microsoft’s acquisition of LinkedIn

Ecosystem view based on investment styles

Any perspective on this ecosystem must be seen in the context of a number of limitations. Business ecosystems are formed from basic connections between human and organizational relationships. Some of these relationships are physical and others are inferred; for example, an organization employs a CEO, a CFO, and so on, operates primarily at a location, and sells a number of products and services to a defined market. Other relationships include sub-organizations, funding, and venture-related interactions.

Investment styles vary widely, but all of these players have venture activity and seed-funding projects. The ecosystem is tightly integrated, with competitive forces creating tight boundaries across the services and products offered. If Microsoft wanted access to the wider Salesforce market, it needed to find the primary data play and enable its core products across their platforms.

Microsoft market performance

Microsoft had a rocky few years under Ballmer, but Satya Nadella’s focus brought it back on track. He took over the reins in 2014 and started acquiring key organizations to bolster Microsoft’s core future focus as a key digital player in the cloud.

LinkedIn’s acquisition might not have made sense initially, but the competitive ecosystem showed that primary data is a key competitive force. Google, Apple, and others all focused on owning primary client data and leveraged it across their products. Salesforce already had well-developed models as its dominance continued.

Competitive graph approach

The result is an overlapping graph that enables multiple services beyond CRM with primary data. But not just any data: LinkedIn is the most widely used business network available today. Google’s knowledge graph, Facebook’s social graph, and LinkedIn’s economic graph have all created an entirely new way of looking at the world. It’s not that network theory is new, but that these companies have socialized the use of network data across many different constructs. Google has both primary data and links to the world’s knowledge, Facebook has the same, and LinkedIn has captured a key portion of the incredibly important world of professional relationships.
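As a toy illustration of the graph framing (this is not LinkedIn’s or Microsoft’s actual data model; every name and relation below is invented), the sketch joins economic-graph style relationships with a CRM record so that one dataset can answer a question the other cannot:

```python
# Toy "economic graph" joined with a CRM record. All entities and relations
# are invented for illustration; nothing here reflects LinkedIn's real schema.
import networkx as nx

graph = nx.Graph()

# Economic-graph style edges: who works where, who knows whom, who has which skill.
graph.add_edge("Alice", "Contoso Ltd", relation="works_at")
graph.add_edge("Bob", "Contoso Ltd", relation="works_at")
graph.add_edge("Bob", "Carol", relation="connected_to")
graph.add_edge("Carol", "Fabrikam Inc", relation="works_at")
graph.add_edge("Carol", "cloud migration", relation="has_skill")

# CRM-style record: a prospect account owned by a salesperson.
crm_accounts = {"Fabrikam Inc": {"stage": "prospect", "owner": "Alice"}}

# Join the two datasets: find a warm-introduction path from owner to prospect.
for account, record in crm_accounts.items():
    path = nx.shortest_path(graph, source=record["owner"], target=account)
    print(f"Warm intro path to {account}: {' -> '.join(path)}")
```

Running it prints a path from Alice through her Contoso colleagues to Fabrikam Inc, the kind of query that needs the relationship graph and the CRM record together.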

Looking at all the inferences across the ecosystem, it becomes clear that LinkedIn was a lone player amongst many dominant, multi-service strategic technology companies. To further explore and exploit the value created by its vast network, LinkedIn needed to move into a different competitive category. That need created the market pressure under which Salesforce and Microsoft ended up fighting over LinkedIn.

Bottom Line

The combination of the new Microsoft strategy and the ability to compete with other big players like Amazon Web Services and Google Cloud will propel the company into a new competitive space.