Unboxing Google’s 7 New Principles of Artificial Intelligence

By Ivan Rodriguez, founder, Geek on Record, and a software engineering manager at Microsoft How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when Duplex was announced last month, a new capability of Google’s digital Assistant […]

By Ivan Rodriguez, founder, Geek on Record, and a software engineering manager at Microsoft

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when Duplex was announced last month, a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses.

The root of the controversy lied on the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the other side of the call. Many tech experts wondered if this is an ethical practice or if it’s necessary to hide the digital nature of the voice.

Google was also criticized last month by another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reasons. It is such a new and powerful technology that it’s still unclear how many areas of our life will we dare to infuse with it, and it’s difficult to set rules around the unknown. Google Duplex is a good example of this, it’s a technological development that we would have considered “magical” 10 years ago, that today scares many people.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry drivers of AI. Here are some remarks on each of them:

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now getting the ability to switch between different domain areas in a transparent way for the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside home, like your favorite restaurants, your friends, your calendar, etc., its influence in your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one since it bows to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled an AI with a Twitter interface and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situations is critical. Our kids are going to grow in a world increasingly assisted by AI, so we need to educate the system before it’s exposed to internet trolls and other bad players.

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft’s response to the Tai fiasco was to take it down and admit an oversight on the type of scenarios that the AI was tested against. Safety should always be one of the first considerations when designing an AI.

4. Be accountable to people

The biggest criticism Google Duplex received was whether or not it was ethical to mimic a real human without letting other humans know. I’m glad that this principle just states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since it’s the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs shall be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. Cambridge Analytica’s incident, where personal data was shared with unauthorized third parties, magnified the problem by jeopardizing user’s trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade, to find the balance between giving up your privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us less uncomfortable when an AI knows the most intimate details about our lives.

Read the source post at Geek on record.

To Share or Not to Share: That is the Big Data Question

Between the disclosures this year about Facebook’s lax data sharing policies and the European Union’s GDPR (General Data Protection Regulation), a lot of people are talking about data privacy and consumer rights. How much data should you share as a consumer with companies like Facebook or Google? But what about businesses? Enterprise organizations may be […]

Between the disclosures this year about Facebook’s lax data sharing policies and the European Union’s GDPR (General Data Protection Regulation), a lot of people are talking about data privacy and consumer rights. How much data should you share as a consumer with companies like Facebook or Google?

But what about businesses?

Enterprise organizations may be dealing with their own data privacy dilemma — should they share their corporate data with partners or with vendors or with some other organization? If so, what data is OK to share, and what should they keep as private and proprietary? After all, data is the new oil. Amazon, Facebook, and Google have all built multi-billion dollar companies by collecting and leveraging data.

Although it is one of the top assets a company may have, there may be compelling reasons to share data, too. For instance, leading edge cancer centers could potentially speed up and advance society’s effort to cure cancer if they shared the data that each of them collected. But sharing it with a competitor could also erode their own competitive edge in the market.

Organizations may also be considering participation in a vendor program such as one under development at SAP called Data Intelligence that will anonymize enterprise customer data and allow those customers to benchmark themselves against the rest of the market.

“People are realizing that the data they have has some value, either for internal purposes or selling to a data partner, and that is leading to more awareness of how they can share data anonymously,” Mike Flannagan of SAP told InformationWeek in an interview earlier this year. He said that different companies are at different levels of maturity in terms of how they think about their data.

Even if you share data that has been anonymized in order to train an algorithm, the question remains whether you are giving away your competitive edge when you share your anonymized data assets. Organizations need to be careful.

“Data is extremely valuable,” said Ali Ghodsi, co-founder and CEO of Databricks (the big data platform with its origins offering hosted Spark) and an adjunct professor at the University of California, Berkeley. In Ghodsi’s experience, organizations don’t want to share their data, but they are willing to sell access to it. For instance, organizations might sell limited access to particular data sets for a finite period of time.

Data aggregators are companies that will create data sets to sell by scraping the web, Ghodsi said.

Then there are older companies that may have years or decades of data that have not been exposed yet to applied AI and machine learning, Ghodsi said, and those companies may hope to use those gigantic data sets to catch up and gain a competitive edge. For instance, any retailer with a loyalty card may have aggregated data over 10 or 20 years.

In Ghodsi’s experience, organizations want more data, but they are unwilling to share it, sometimes even within their own organizations. In many organizations, IT controls access to the data and may not always be willing to say yes to all the requests from data scientists in the line-of-business areas. That’s among the topics in a December 2017 paper co-authored by Ghodsi and other researchers from UC Berkeley titled A Berkeley View of Systems Challenges for AI. Ghodsi said that the group is doing research to find ways in which you can incentivize companies to share more of their data.  One of the ways is in the model itself — the machine learning model is a very compact summary of all the data.

Read the source article in Information Week.

Harnessing Technology To Kill User Privacy

Somebody mentioned it to me some time back and I laughed then. The day you decided to come online you actually accepted “No to privacy”. Now its even worse as we can even use radio signals to see through walls so no need to be online. Signals now can be used to monitor a person’s precise movements through a solid wall. What else and  how far we will go cant answer now. But all these development for sure has its pros and cons sides.

The post Harnessing Technology To Kill User Privacy appeared first on Vinod Sharma’s Blog.

Source

Somebody mentioned it to me some time back and I laughed then. The day you decided to come online you actually accepted "No to privacy". Now its even worse as we can even use radio signals to see through walls so no need to be online. Signals now can be used to monitor a person’s precise movements through a solid wall. What else and  how far we will go cant answer now. But all these development for sure has its pros and cons sides.

The post Harnessing Technology To Kill User Privacy appeared first on Vinod Sharma's Blog.

Source

Deep Learning: It’s Time for AI to Get Philosophical

By Catherine Stinson, postdoctoral scholar at the Rotman Institute of Philosophy, University of Western Ontario, and former machine-learning researcher I wrote my first lines of code in 1992, in a high school computer science class. When the words “Hello world” appeared in acid green on the tiny screen of a boxy Macintosh computer, I was […]

By Catherine Stinson, postdoctoral scholar at the Rotman Institute of Philosophy, University of Western Ontario, and former machine-learning researcher

I wrote my first lines of code in 1992, in a high school computer science class. When the words “Hello world” appeared in acid green on the tiny screen of a boxy Macintosh computer, I was hooked. I remember thinking with exhilaration, “This thing will do exactly what I tell it to do!” and, only half-ironically, “Finally, someone understands me!” For a kid in the throes of puberty, used to being told what to do by adults of dubious authority, it was freeing to interact with something that hung on my every word – and let me be completely in charge.

For a lot of coders, the feeling of empowerment you get from knowing exactly how a thing works – and having complete control over it – is what attracts them to the job. Artificial intelligence (AI) is producing some pretty nifty gadgets, from self-driving cars (in space!) to automated medical diagnoses. The product I’m most looking forward to is real-time translation of spoken language, so I’ll never again make gaffes such as telling a child I’ve just met that I’m their parent or announcing to a room full of people that I’m going to change my clothes in December.

But it’s starting to feel as though we’re losing control.

These days, most of my interactions with AI consist of shouting, “No, Siri! I said Paris, not bratwurst!” And when my computer does completely understand me, it no longer feels empowering. The targeted ads about early menopause and career counselling hit just a little too close to home, and my Fitbit seems like a creepy Santa Claus who knows when I am sleeping, knows when I’m awake and knows if I’ve been bad or good at sticking to my exercise regimen.

Algorithms tracking our every step and keystroke expose us to dangers much more serious than impulsively buying wrinkle cream. Increasingly polarized and radicalized political movements, leaked health data and the manipulation of elections using harvested Facebook profiles are among the documented outcomes of the mass deployment of AI. Something as seemingly innocent as sharing your jogging routes online can reveal military secrets. These cases are just the tip of the iceberg. Even our beloved Canadian Tire money is being repurposed as a surveillance tool for a machine-learning team.

For years, science-fiction writers have spelled out both the technological marvels and the doomsday scenarios that might result from intelligent technology that understands us perfectly and does exactly what we tell it to do. But only recently has the inevitability of tricorders, robocops and constant surveillance become obvious to the non-fan general public. Stories about AI now appear in the daily news, and these stories seem to be evenly split between hyperbolically self-congratulatory pieces by people in the AI world, about how deep learning is poised to solve every problem from the housing crisis to the flu, and doom-and-gloom predictions of cultural commentators who say robots will soon enslave us all. Alexa’s creepy midnight cackling is just the latest warning sign.

Read the source article at the Globe and Mail.

DATA – Blue Ocean Shift Strategy (Boss)

BOSS – Blue Ocean Shift Strategy can actually help and create vision to focus on areas such as AI, blockchain for education, health & agriculture, create ecosystems using BigData analytics and IoT. To capture a quick snapshot of this strategy, cer…

BOSS – Blue Ocean Shift Strategy can actually help and create vision to focus on areas such as AI, blockchain for education, health & agriculture, create ecosystems using BigData analytics and IoT. To capture a quick snapshot of this strategy, certainly, Big Data appears to be most effective and efficient driver for Blue Ocean Strategy. Based on a limited set....

2018 Year of Intelligence – Artificial & Augmentation

Year 2018 will be known as year of Artificial Intelligence and Intelligence Augmentation for sure.We used to think artificial intelligence was a silly sci-fi concept but when you really look into it, it seems like its been slowly encroaching into most …

Year 2018 will be known as year of Artificial Intelligence and Intelligence Augmentation for sure.We used to think artificial intelligence was a silly sci-fi concept but when you really look into it, it seems like its been slowly encroaching into most areas of everyday life! AI would become just a computer inside the robot or another software or a brain siting out of human body.

Demystifying AI, Machine Learning and Deep Learning

This was the first serious proposal in the philosophy of artificial intelligence, which can be explained as: a science developing technology to mimic humans to respond in a circumstance. In simple words AI involves machines that behave and think like h…

This was the first serious proposal in the philosophy of artificial intelligence, which can be explained as: a science developing technology to mimic humans to respond in a circumstance. In simple words AI involves machines that behave and think like humans i.e Algorithmic Thinking in general. Computers start simulating the brain’s sensation, action, interaction, perception and cognition abilities.

World Wide Data Wrestling

Big data presents a tremendous opportunity for enterprises across multiple industries especially in the tsunami like data flow industry of “Payments”. FinTech, InsureTech, MedTech are major data generating industries i.e massive group of factories. Acc…

Big data presents a tremendous opportunity for enterprises across multiple industries especially in the tsunami like data flow industry of “Payments”. FinTech, InsureTech, MedTech are major data generating industries i.e massive group of factories. According to some data from Google it shows technology based innovative insurance companies

FinTech – Machine Learning and Recommenders

These terms are used to from BI Intelligence, To illustrate the various applications of AI in eCommerce and use case studies to show how this technology has benefited merchants/ecommerce service providers. Different consumers have varying, and often ve…

These terms are used to from BI Intelligence, To illustrate the various applications of AI in eCommerce and use case studies to show how this technology has benefited merchants/ecommerce service providers. Different consumers have varying, and often very specific, requirements for product, needs, expect performance, cost of consumption, silicon wafer thin kind of cost for best thing in mind and other parameters.

AI in FinTech

Bringing Artificial intelligence to make FinTech better, demystified and simple. How FinTech intelligence will become better with machine learning. Artificial Intelligence is a field that includes everything that is associated with the data (cleansing,…

Bringing Artificial intelligence to make FinTech better, demystified and simple. How FinTech intelligence will become better with machine learning. Artificial Intelligence is a field that includes everything that is associated with the data (cleansing, preparation, analysis and many more), Learning processes to describe, diagnose, predict and prescribe with use of AI subfields like Machine Learning, Deep earning and Neural networks. Machine learning is a field of Artificial Intelligence, which is allowed to software applications for making accurate results.