BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Date iCal//NONSGML kigkonsult.se iCalcreator 2.20.2//
METHOD:PUBLISH
X-WR-CALNAME;VALUE=TEXT:DIAG Events
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART:20171029T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20170326T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RDATE:20180325T020000
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:calendar.12454.field_data.0@www.ugovricerca.uniroma1.it
DTSTAMP:20260408T074636Z
CREATED:20170923T070732Z
DESCRIPTION:Speaker: Randy Goebel (University of Alberta) Title: What does
 logic and abduction have to do with deep visual explanation? Abstract:
 Artificial Intelligence research that has exploited and adapted the
 foundations of logic and scientific reasoning has significantly
 contributed to a literature on the creation and management of formal
 theories\, including the development of reasoning architectures that
 support the concept of explanation. A good representative of this work is
 the development of what could be generally labeled as non-monotonic
 abductive reasoning systems\, including typical applications in
 diagnostic reasoning\, theory formation\, and causality. But recent
 impactful advances in machine learning\, especially deep learning\, have
 catalyzed new enthusiasm for Artificial Intelligence performance
 applications\, where mathematically sophisticated supervised learning
 algorithms have demonstrated supra-human performance. But two significant
 problems ensue: 1) the learned models are mostly if not wholly opaque and
 cannot be inspected (no debugging\, no error analysis\, no explanation of
 unanticipated model predictions)\, and 2) models cannot be imbued (or
 easily so) with background knowledge to accelerate the knowledge learned
 by the models. We provide a three-component framework for what we
 ultimately label “deep visual explanation.” Motivated by a recent rebirth
 of ideas arising from problems 1 and 2 above\, we structure the idea of
 “explainable AI” into three components: first\, the foundations of
 explainability arising from non-monotonic abductive reasoning\; second\,
 a sketch of a logical theory of visualization\, which provides a basis
 for consolidating complex n-dimensional data into a form from which the
 human visual system can derive plausible inferences\; and third\, a
 sketch of a system based on the first two components\, which we call deep
 visual explanation. We describe an instance of this third component as a
 method of instrumenting deep learned but opaque models so that one can
 observe a visual explanation of a deep model’s internal behaviour. Bio:
 R.G. (Randy) Goebel is currently a professor in the Department of
 Computing Science at the University of Alberta\, Associate Vice President
 (Research) and Associate Vice President (Academic)\, and a principal
 investigator in the Alberta Machine Intelligence Institute (AMII). He
 received the B.Sc. (Computer Science)\, M.Sc. (Computing Science)\, and
 Ph.D. (Computer Science) from the Universities of Regina\, Alberta\, and
 British Columbia\, respectively. Professor Goebel's theoretical work on
 abduction\, hypothetical reasoning\, and belief revision is
 internationally well known\, and his recent research is focused on the
 formalization of visualization\, with applications in several areas\,
 including web mining\, optimization\, natural language processing\, legal
 reasoning\, precision health\, and intelligent transportation. Randy has
 previously held faculty appointments at the University of Waterloo\, the
 University of Tokyo\, Multimedia University (Kuala Lumpur)\, and Hokkaido
 University (Sapporo)\; visiting researcher engagements at the National
 Institute of Informatics (Tokyo)\, the German Research Centre for
 Artificial Intelligence (DFKI\, Germany)\, and National ICT Australia
 (NICTA\, now Data61)\; and is actively involved in collaborative research
 projects in Canada\, France\, Japan\, China\, and Germany. Speaker: David
 Israel (Stanford Research Institute) Title: Some Thoughts on RoboEthics
 -- From a Non-Roboticist and a Complete Amateur at Ethics Abstract: A lot
 of very smart people (Stephen Hawking\, Elon Musk\, Peter Norvig\, Stuart
 Russell\, and many others) have expressed deep concerns about the
 existential threat to our species (!) that may be posed by the
 development of 'super-intelligent' machines. In this talk\, I want to
 address what I consider much more immediate\, and much less
 science-fiction-inspired\, worries having to do with autonomous
 systems\, in particular with even partially autonomous weapons systems.
 Besides frightening my audience\, I hope to get them to think about some
 of the issues -- both policy-oriented and ethical -- that autonomous
 systems raise. Bio: David J. Israel is a Principal Scientist Emeritus at
 the Artificial Intelligence Center at SRI\, where he has been the
 Director of the Natural Language Program in the Artificial Intelligence
 Center\, Information and Computing Sciences Division\, SRI International.
 He has worked in a number of areas in AI\, including Knowledge
 Representation and Reasoning\, Theory of (Rational) Action\, and various
 parts of Natural Language Processing\, such as Formal Semantics and the
 Theory and Design of Machine Reading systems.
DTSTART;TZID=Europe/Paris:20170928T160000
DTEND;TZID=Europe/Paris:20170928T160000
LAST-MODIFIED:20191008T082902Z
LOCATION:Via Ariosto 25\, Aula B2
SUMMARY:Distinguished lecture in AI by Randy Goebel (U. Alberta) and David
 Israel (SRI)
URL;TYPE=URI:http://www.ugovricerca.uniroma1.it/node/12454
END:VEVENT
END:VCALENDAR
