
Author Topic: Postdoc Position in Trustworthy AI @ UK  (Read 711 times)

BigBrother

  • Administrator
  • Sr. Member
  • *****
  • Posts: 408
  • Karma: 2000
Postdoc Position in Trustworthy AI @ UK
« on: January 18, 2021, 02:51:32 am »
We invite applications for a postdoctoral research associate position based in the School of Informatics, University of Edinburgh. This two-year position is open to someone with a background and PhD in AI, Machine Learning, HCI, or a similar area, with practical experience in an application domain such as AI-based medical diagnostics or autonomous vehicles. The ideal candidate would also have some prior exposure to methods for explainability, ML debugging, and AI ethics.

The central purpose of the job will be to investigate issues around responsibility in machine learning-enabled AI systems, such as in the areas of AI-based medical diagnosis and semi-automated flight systems. A key challenge with these systems is identifying and correcting systematic patterns of mistakes made by the ML models, as well as by the team behind the model in its development and validation processes. By developing methods to address this, we aim to gain a deeper understanding of responsibility, accountability and liability gaps in AI system design, and to propose approaches to improving the design, testing and post-deployment processes associated with these systems.

The Trustworthy Autonomous Systems (TAS) programme brings together research communities and key stakeholders to drive forward cross-disciplinary, fundamental research to ensure that autonomous systems are safe, reliable, resilient, ethical and trusted.

The appointee to this post on Trustworthy AI will be supervised by Prof Subramanian Ramamoorthy (Informatics, UoE), Principal Investigator for the Research Node. The appointee will be co-supervised by Prof Shannon Vallor (Edinburgh Futures Institute) and Dr Vaishak Belle (Informatics, UoE).

The candidate is expected to have a strong background in AI system design and a deep appreciation for development processes and human factors. The candidate should be familiar with a range of machine learning-based methods, preferably with some prior exposure to automation in a concrete application domain such as medical data analysis or automated vehicles. The ideal candidate will be able to engage both with the machine learning community and with researchers who study the development process through a qualitative lens, with emphasis on fairness, accountability and transparency, and will have a strong track record of publication in leading conferences and journals in AI (e.g. IJCAI, AAAI, NeurIPS, ICML, ICLR), or in HCI and FAccT venues (e.g. SIGCHI, ACM FAccT, AIES).

Please follow this link for the official advertisement and application process: https://elxw.fa.em3.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1001/job/385

Informal enquiries should be directed to Prof Ram Ramamoorthy in the School of Informatics.

---
Professor Subramanian Ramamoorthy
Chair of Robot Learning and Autonomy
Director, Institute of Perception, Action and Behaviour
School of Informatics | University of Edinburgh | http://rad.inf.ed.ac.uk/0
Member of Executive Committee | Edinburgh Centre for Robotics
Turing Fellow | Alan Turing Institute
--
Best Regards
CFTIRC Admin
https://www.acfti.org/cftirc-community