Dr. Stuart Russell Answers AIAI Executive Series Questions

I am honored to interview Dr. Russell and get his thoughts. He is a leading expert in artificial intelligence, Professor of Computer Science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley, and the author of numerous articles and books. While expediency tempts us to jump directly into machine learning, I strongly encourage my students to first read Dr. Russell's book Artificial Intelligence: A Modern Approach (coauthored with Peter Norvig). The book provides the conceptual foundation upon which machine learning knowledge can be built. – Professor Naqvi

Professor Naqvi: What factors led to the founding of your center? How can the world of AI benefit from your research?

Dr. Stuart Russell: Artificial intelligence research is concerned with the design of machines capable of intelligent behavior, i.e., behavior likely to be successful in achieving objectives. The long-term outcome of AI research seems likely to include machines that are more capable than humans across a wide range of objectives and environments. This raises a problem of control: given that the solutions developed by such systems are intrinsically unpredictable by humans, it may occur that some such solutions result in negative and perhaps irreversible outcomes for humans. CHCAI’s goal is to ensure that this eventuality cannot arise, by refocusing AI away from the capability to achieve arbitrary objectives and towards the ability to generate provably beneficial behavior.

The AI community will benefit because this new goal is much less likely to cause problems that could lead to a partial or complete shutdown of AI research. Systems that make mistakes that harm users, either financially or physically, can have negative consequences for an entire sector of the AI enterprise.

Professor Naqvi: If robots learn from us, from observing us, and from reading about us – wouldn’t they develop the same biases and prejudices that limit us?

Dr. Stuart Russell: It is clear that prejudices hurt others, and the value alignment process for machines should take into account the values of all humans. Thus, a prejudiced value system is not one that machines can consistently adopt.

Professor Naqvi: Sanders and Tewkesbury (2015) contend that "As AI develops, we might have to engineer ways to prevent consciousness in them just as we engineer other systems to be safe." You seem to be doing the opposite. Why?

Dr. Stuart Russell: I don’t think we’re proposing the opposite (i.e., engineering consciousness into machines). We’re just proposing the goal of building machines that will eventually learn the right objectives. We have absolutely no idea how to design or test for consciousness in machines. And let me reiterate: consciousness isn’t the problem; competence is.

Professor Naqvi: Your center will certainly require multidisciplinary leadership. How do you plan to include experts from multiple backgrounds?

Dr. Stuart Russell: We have experts from cognitive science and game theory, and will be adding affiliate faculty in philosophy, economics, and possibly sociology. As we learn to talk the same language and develop new research projects, we will expand the Center’s funding to accommodate them.

Professor Naqvi: The literature or data used to train AI-based systems could be limited or tilted towards one value system (e.g., Western values); hence even a fair algorithm will have no choice but to learn the values for which the most data is available. How would you prevent that?

Dr. Stuart Russell: A good Bayesian understands that data come from a data-generating process that may have over- or under-representation of all kinds. (For example, the great majority of newspaper articles focus on just a few thousand people in the world.) If handled properly, this does not introduce a bias, but there will inevitably be greater uncertainty associated with the values and preferences of people who are simply not visible in the data. If the machine’s decisions might impact those people, then it has an incentive to gather more information about them.
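Dr. Russell's point can be illustrated with a minimal Bayesian sketch (an editor's hypothetical example, not from the interview): with a uniform prior over a preference probability, a well-represented and an under-represented group can yield similar point estimates, but the posterior variance for the under-represented group is far larger, which is exactly the "greater uncertainty" that motivates gathering more information.

```python
def beta_posterior(successes, trials, a0=1.0, b0=1.0):
    """Posterior Beta(a, b) over a preference probability,
    starting from a uniform Beta(1, 1) prior.
    Returns the posterior mean and variance."""
    a = a0 + successes
    b = b0 + (trials - successes)
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Well-represented group: 600 of 1000 observations indicate a preference.
m_big, v_big = beta_posterior(600, 1000)

# Under-represented group: 6 of 10 observations indicate the same preference.
m_small, v_small = beta_posterior(6, 10)

# Similar point estimates (no systematic bias), but much wider
# uncertainty where data are scarce.
print(round(m_big, 3), round(m_small, 3))  # 0.6 0.583
print(v_small > v_big)                     # True
```

The estimates are not biased against the smaller group; the machine simply knows that it knows less about them.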

It’s also worth pointing out that in a real sense it’s not the robot’s value function at all. The robot’s objective is just to help humans realize their values, and what it learns is what those values are.

So, a robot chef in a vegan household isn’t a vegan robot, it’s a robot that knows the humans it is cooking for are vegans. If they lend the robot to their neighbors who eat meat, the robot won’t refuse to cook the steak on moral grounds!

Professor Naqvi: Do you feel that a bit of subjectivity (qualitativeness) and creativity is key to learning from observed behavior? If yes, would AI be able to adapt to qualitative and creative learning frameworks? If no, how would its bias-free and purely quantitative learning differ from typical human learning?

Dr. Stuart Russell: Learning anything complex generally requires acquiring, in some form, discrete or qualitative structures as well as continuous parameters.
(For example, with deep learning, experimenters try all sorts of network structures — this learning process is sometimes called “graduate student descent”.) It’s also the case that optimal decisions within decision theory can be purely qualitatively determined — for example, you can decide that you’d rather crash into a solid concrete wall at 5mph than 75mph, without calculating any probabilities and utilities.
So there are many aspects of human learning and decision making that are good ideas for machines too.
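The crash example can be made concrete with a small sketch (hypothetical severity numbers, not from the interview): if one option is at least as good as the other in every possible scenario, then any monotone utility function prefers it, so the decision is settled qualitatively, with no probabilities or utilities computed.

```python
# Hypothetical injury-severity scores (higher = worse) across a few
# scenarios (e.g., seatbelt on/off, airbag deploys or not).
crash_5mph = [1, 2, 2, 3]
crash_75mph = [6, 8, 9, 10]

def dominates(a, b):
    """True if option `a` is at least as good as `b` in every scenario
    and strictly better in at least one. In that case ANY utility
    function that is monotone in severity prefers `a`, regardless of
    the probabilities of the scenarios."""
    no_worse = all(x <= y for x, y in zip(a, b))
    strictly_better = any(x < y for x, y in zip(a, b))
    return no_worse and strictly_better

print(dominates(crash_5mph, crash_75mph))  # True
```

This is the dominance reasoning Dr. Russell alludes to: a purely qualitative comparison suffices whenever one option wins in every state of the world.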

The main difference from humans, I think, is that the robot has no
preferences of its own; it should be purely altruistic. This is not very common in humans.

Professor Naqvi: Humans have a built-in feedback loop that reinforces pain and pleasure stimuli corresponding to the basic hardwired survival mechanism in biological entities (in this case, humans). A lot of learning takes place as a consequence of that feedback loop. How do you integrate that, if you do, in your AI research?

Dr. Stuart Russell: Unlike Asimov, I believe there is no reason for the robot to have any
intrinsic preference for avoiding harm to itself. This preference should
exist only as a subgoal of helping humans, i.e., the robot is less useful to people if it is damaged, and its owner would be out of pocket and perhaps upset, so *to that extent* (and no more) it has an obligation to avoid damage. The self-sacrifice by TARS in Interstellar is an excellent example of how robots should behave, in cases where such extreme measures are indicated.

Professor Naqvi: Thank you so much. I truly appreciate your time.

References:
Sanders, D. & Tewkesbury, G. (2015). It Is Artificial Idiocy That Is Alarming, Not Artificial Intelligence.

Learn About Professor Naqvi’s Revolutionary Work

COGNITIVE (ARTIFICIAL INTELLIGENCE) TRANSFORMATION EXPERT: PROFESSOR Al (A.I.) NAQVI

Professor Al (A.I.) Naqvi is a leading expert in transforming companies from the “e” era to the “ai” era. He specializes in total and integrated business transformation using artificial intelligence. Professor Naqvi developed the first and most comprehensive body of knowledge for AI in Corporate Strategy, AI in Finance, AI in Marketing, AI in HR, and AI in Supply Chain Management. His work has been recognized by the world’s leading professional societies, universities, and companies. Over 300 companies have benefited from Naqvi’s research. He is widely published in both academic and practitioner publications. Al Naqvi’s professional research interests are broad, and include artificial intelligence, applied AI, robotic process automation, deep learning, complex adaptive systems, cognitive organizations and leadership, and strategic cognitive transformation. He teaches several classes on Applied Artificial Intelligence, Deep Learning, RPA, and Cognitive Transformation at the American Institute of Artificial Intelligence. Professor Naqvi is passionate about teaching people about the potential and practical applications of artificial intelligence; he calls it reskilling and re-intellectualization of the workforce. He has designed several products using deep neural networks. Known for making artificial intelligence fun and easy to understand, he has appeared at conferences and shows all over the world. He lives in the greater Washington DC area.

Professor Naqvi’s work is focused on applying deep learning to study advanced models of financial engineering, portfolio management, and asset management. He uses a unique approach that combines complex adaptive systems, behavioral finance, game theory, and real options to design state-of-the-art artificial intelligence systems to predict stock movements and financial market performance.

Published Books

The Beaver Bot of Yellowstone: Pure-Play Leadership for the Artificial Intelligence Revolution
by Al Naqvi (Author), J. Mark Munoz (Author)

As the artificial intelligence revolution sweeps through the global economy, nothing is more important than pure-play leadership. Using a fable, “The Beaver Bot of Yellowstone” guides business leaders on how to lead their firms through the mysterious and complex cognitive transformation. Anyone can master the game with pure-play leadership rules. In a reading that lasts no longer than a one-way Chicago–DC flight, the book helps leaders who don’t have the time or the patience to read thousands of pages of research and theses to get to the bottom of the leadership principles for the cognitive era.

Artificial Intelligence in Human Resources: Human in the Machine (AI in Business Book 110)
by Al Naqvi (Author)

As the cognitive revolution sweeps the business world, this book is the first guide for Human Resources professionals on how to navigate the artificial intelligence transformation. A non-technical book, it is meant for HR strategy leaders. A concise guide, it shows how to develop an integrated vision for the Human Resources department and how to formulate a plan for leading the artificial intelligence revolution. The book presents a comprehensive plan and orchestrates a strategy to embrace and lead, and introduces AIHR (Artificial Intelligence Human Resources) and the concept of MAHRM (Machine and Human Resources Management).

Business Strategy in the Artificial Intelligence Economy
by J. Mark Munoz (Author), Al Naqvi (Author)

Technological breakthroughs relating to artificial intelligence have redefined business operations worldwide. For example, the ways in which data is captured, processed, and utilized to optimize customer interactions have grown by leaps and bounds. The change is redefining the structural dynamics of business strategy, economic theory, and management concepts. Leading technology companies around the world have expanded their research in artificial intelligence. With IBM’s launch of Watson, a new cognitive era has started. Investment firms have backed numerous emerging artificial intelligence companies. Meanwhile, there is a paucity of academic and business research on the subject. This book is a pioneering examination of how artificial intelligence is transforming contemporary business strategy.

Taming The Ticker: How to Link Trading Floor to Shop Floor to Improve a Company’s Stock Value: A Comprehensive Manual for Shareholder Value Creation in the 21st Century
by Al Naqvi (Author)

Taming the Ticker uses a behavioral finance approach to create shareholder value by understanding, incorporating, and actively using shareholder expectations in both operational and financing decisions. It offers an expectations-centric framework for fundamentals-based investing.

Upcoming Books by Professor Naqvi in 2018

Rivet: AI in Business (Publisher – ANTHEM)
Al Naqvi and Mark Munoz

Cognitive Competitive Intelligence (Publisher – TBD)
Al Naqvi and Nan Bulger (CEO of SCIP)

Artificial Intelligence in Policy and Government (Publisher: ANTHEM)
Edited by Al Naqvi and Mark Munoz

Artificial Intelligence – A Primer for Business Executives (Publisher: TBD)
Al Naqvi

Artificial Intelligence and Supply Chain Management (Publisher: TBD)
Al Naqvi and Coauthor

Techcorachment (Publisher: TBD)
Al Naqvi

Artificial Intelligence and Marketing (Publisher: TBD)
Al Naqvi and Angela Long (AMA DC)

Book Chapters by Professor Naqvi

Chapter: Inventing Consciousness: Beyond Business Intelligence – Global Business Intelligence, Business Expert Press, 2018

Chapter: Protecting Value via Information Management – Managerial Forensics, Business Expert Press, 2016

Journal Articles by Professor Naqvi

Artificial Intelligence in Business & Strategy: Journal of AI in Business, Policy, and Economy (Upcoming)

Competitive Dynamics of the AI Economy: The Wicked Problem of Cognitive Competition – Journal of Economics Library, June 2017

Responding to the Will of the Machine: Leadership in the Age of AI – Journal of Economic Bibliography, September 2017

Artificial Intelligence and Urbanization: The Rise of Elysium City – Journal of Political Economy, March 2017 (co-author Mark Munoz)

Al Naqvi Speaking & Interviews

Interview by SCIP

Speaking at SCIP

Speaking at Ethical Corp NY Meeting

Radio Interview

Conference and Media Appearances of Prof. Naqvi

Innovating for Value in Healthcare

San Diego, March 7, 2017, Frost & Sullivan
https://ww2.frost.com/files/3114/8898/4501/Medtech17_FullBrochure_030617.pdf
Prof. Naqvi talked about the role of AI in healthcare and medical equipment technologies. He later published an article: http://aipost.com/transforming-healthcare-artificial-intelligence/

SCIP Strategy and Competitive Intelligence

Atlanta, May 15-18, 2018
https://www.shiftcentral.com/top-5-key-take-aways-scip-2017
https://www.linkedin.com/pulse/scip-17-summary-developingengaging-modern-workforce-andrew-garrett/
http://aipost.com/wp-content/uploads/2017/08/An-Interview-with-Al-Naqvi_V2Lakshika-Trikha.pdf

Insight

AI in Healthcare Summit

June 26-27, 2017 • Marines’ Memorial Club • San Francisco, CA
http://www.insightxnetwork.com/uploads/8/3/2/0/83200454/ien-ai_in_healthcare_brochure_r4.pdf
Al Naqvi presented the state of the industry in AI in Healthcare.

APICS Chicago Conference June 2017

AI in Supply Chain Management

Prof. Naqvi talked about AI in supply chain management.

Al Naqvi was interviewed on radio

https://soundcloud.com/theworkforceshow/interview-with-al-naqvi-president-and-ceo-of-the-american-institute-of-artificial-intelligence

SCIP Portugal

https://www.youtube.com/watch?v=y8GK8xr7-Vc

Insight

AI in Healthcare Summit

January 18-19, 2018
Harvard Club | Boston, MA
https://www.insightxnetwork.com/ai-healthcare-summit.html
http://www.insightxnetwork.com/uploads/8/3/2/0/83200454/ien-ai_in_healthcare_brochure_r5__1_.pdf

Insight

AI and Machine Learning

January 25-26, 2018 • The Standard Club • Chicago, IL
https://www.insightxnetwork.com/uploads/8/3/2/0/83200454/artificial_intelligence___machine_learning_101__t109_-r4.pdf

AI Keynote: Impact of AI on business and society

Leading experts share their views on the potential positive and negative impacts of AI

With AI moving at a rapid pace, does your business understand how and where it’ll impact its systems and operations? How can business ensure the impending large-scale change has a positive impact on society?

Al Naqvi
President
American Institute of Artificial Intelligence

Michael Perry
Director of Kit and Product
Shopify

Brian Reaves
Chief Diversity & Inclusion Officer
Dell

Susanne Katus
VP Business Development
Datamaran

Krina Amin
Head of US Strategy
Ethical Corporation