
Machine Learning Site Reliability

How can we build systems that will perform well in the presence of novel, even adversarial, inputs? What techniques will let us safely build and deploy autonomous systems at a scale where human monitoring becomes difficult or infeasible? Answering these questions is critical to guaranteeing the safety of emerging high-stakes applications of AI, such as self-driving cars and automated surgical assistants.

Machine learning will also help reshape the field of statistics, by bringing a computational perspective to the fore and raising issues such as never-ending learning. Of course, both computer science and statistics will in turn help shape machine learning as they progress and provide new ideas that change the way we view learning.

As a session at DrupalCon Vienna 2017 on automation and machine learning with site reliability engineering put it: 'Site reliability engineering (SRE) is a discipline that incorporates aspects of software engineering and applies that to operations whose goals are to create ultra-scalable and highly reliable software systems.'

This workshop will bring together researchers in areas such as human-robot interaction, security, causal inference, and multi-agent systems in order to strengthen the field of reliability engineering for machine learning systems. We are interested in approaches that have the potential to provide assurances of reliability, especially as systems scale in autonomy and complexity.
We will focus on five aspects that can aid us in designing and deploying reliable machine learning systems: robustness, awareness, adaptation, value learning, and monitoring. Some possible questions touching on each of these categories are given below, though we also welcome submissions that do not fit directly into these categories.

  • Robustness: How can we make a system robust to novel or potentially adversarial inputs? What are ways of handling model mis-specification or corrupted training data? What can be done if the training data is potentially a function of system behavior or of other agents in the environment (e.g. when collecting data on users that respond to changes in the system and might also behave strategically)?
  • Awareness: How do we make a system aware of its environment and of its own limitations, so that it can recognize and signal when it is no longer able to make reliable predictions or decisions? Can it successfully identify “strange” inputs or situations and take appropriately conservative actions? How can it detect when changes in the environment have occurred that require re-training? How can it detect that its model might be mis-specified or poorly calibrated?
  • Adaptation: How can machine learning systems detect and adapt to changes in their environment, especially large changes (e.g. low overlap between train and test distributions, poor initial model assumptions, or shifts in the underlying prediction function)? How should an autonomous agent act when confronting radically new contexts?
  • Value Learning: For systems with complex desiderata, how can we learn a value function that captures and balances all relevant considerations? How should a system act given uncertainty about its value function? Can we make sure that a system reflects the values of the humans who use it?
  • Monitoring: How can we monitor large-scale systems in order to judge if they are performing well? If things go wrong, what tools can help?
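To make the robustness questions above concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy logistic-regression model. The model, its weights, the input, and the epsilon value are all illustrative assumptions introduced here, not part of the workshop description.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, x):
    """Probability of the positive class under a toy logistic model."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)))

def fgsm_perturb(weights, x, y, eps):
    """Fast-gradient-sign-style perturbation of the input x.

    For logistic loss, d(loss)/dx_i = (p - y) * w_i, so nudging each
    feature by eps in the sign of that gradient increases the loss.
    """
    p = predict(weights, x)
    grad = [(p - y) * w for w in weights]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

weights = [2.0, -1.0]            # illustrative fixed weights
x, y = [1.0, 1.0], 1.0           # a correctly classified clean input
x_adv = fgsm_perturb(weights, x, y, eps=0.5)
print(predict(weights, x))       # confidence on the clean input
print(predict(weights, x_adv))   # lower confidence on the perturbed input
```

Even this two-feature example shows the core issue: a small, targeted input change can flip a confident prediction, which is why robustness to adversarial inputs cannot be assessed on clean test data alone.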
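One simple instance of the awareness idea, recognizing when a system should not trust its own prediction, is to abstain when the predictive distribution's entropy is high. This sketch assumes a classifier that outputs class probabilities; the 0.8 threshold fraction is an illustrative choice, not a recommendation from the workshop.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def predict_or_abstain(probs, max_entropy_frac=0.8):
    """Return the argmax class index, or None to signal 'too uncertain'.

    The threshold is a fraction of the maximum possible entropy
    (that of the uniform distribution over the same classes).
    """
    threshold = max_entropy_frac * math.log(len(probs))
    if entropy(probs) > threshold:
        return None  # conservative action: defer to a human or fallback
    return max(range(len(probs)), key=lambda i: probs[i])

print(predict_or_abstain([0.97, 0.02, 0.01]))  # confident: returns a class
print(predict_or_abstain([0.40, 0.35, 0.25]))  # near-uniform: returns None
```

Entropy thresholds only catch uncertainty the model itself expresses; a badly calibrated or mis-specified model can be confidently wrong, which is exactly the harder awareness problem the bullet points raise.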
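For the adaptation and monitoring questions, a common first line of defense is to compare a live window of model inputs or scores against a reference window and flag large distribution shifts. The sketch below computes a two-sample Kolmogorov-Smirnov statistic by hand; the data and the 0.2 alert threshold are illustrative assumptions (real deployments would calibrate the threshold and typically use a tested library implementation).

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, v):
        # fraction of the sample that is <= v
        return bisect.bisect_right(sorted_sample, v) / len(sorted_sample)

    values = sorted(set(a) | set(b))
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in values)

train_scores = [0.1 * i for i in range(100)]        # reference window
live_scores = [5.0 + 0.1 * i for i in range(100)]   # shifted live window

drift = ks_statistic(train_scores, live_scores)
if drift > 0.2:  # illustrative alert threshold
    print("distribution shift detected: consider re-training")
```

Detecting the shift is only the first step; the harder questions above concern what the system should do next, whether to re-train, fall back to a conservative policy, or escalate to a human operator.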