Meet the researcher – Sid Narayanaswamy
Our series meeting Hub researchers continues with the University of Edinburgh’s Siddharth (Sid) Narayanaswamy, a reader in explainable AI, who recently used doughnut eating, celebrities and handbags to illustrate a talk on generative modelling and representation learning at UCL.
Sid Narayanaswamy,
University of Edinburgh
“It really is the case that we’re still looking to find ways of establishing common ground between humans and machines.”
Tell us about yourself and your work
I work on quite a broad and cross-disciplinary set of topics, with the aim of building computational models for perception, cognition and action based on how humans do things. It was already an active area of research, but activity has surged with the explosion of generative AI over the past few years. This has brought new perspectives on these models, which are, up to a point, adept at doing what humans do, but in quite different ways.
I have a small group in the School of Informatics at the University of Edinburgh working at the intersection of machine learning, computer vision, natural-language processing, cognitive science, robotics and neuroscience.
My group is interested in compositionality and in representations that are inherently compositional. We build on this, something people are quite adept at, to establish common ground between machines and humans through interaction. We believe this has significant implications for building robust, generalisable and interpretable AI and ML systems.
We are looking for new PhD students and postdocs to support us in this work, so watch this space.
When you work with the hub, which working group are you part of?
I am a co-lead for the Methodology working group, focusing on the algorithms and theoretical foundations for Gen AI. At Edinburgh, as part of this working group, we've been looking at ways to develop more efficient and effective models by leveraging some of the insights from humans that my group and other colleagues have been working on. For example, looking at whether we can use compositionality and structure induction to build models that are more controllable and to make it easier to interpret their behaviour and explain how they did something.
I am also a member of the Multi Modal Models working group, in line with our group's interests in perception and cognition and in developing models that are more effective at processing perceptual data from different modalities.
What do you think that the AI hubs bring to the research ecosystem?
There are two parts to it. Firstly, the hubs are bringing together the most prominent AI researchers in the UK who work on similar things, in a way that maybe would not have happened without this initiative on the part of UKRI or the government. This allows those researchers to widen their networks. The other side of it is that the hubs provide a point of contact for people outside those immediate research clusters to find out what is going on and potentially work with them.
How did you end up as an AI researcher?
I started off doing a lot of hobbyist robotics as an undergrad; there wasn’t anything specifically AI-related in my undergraduate degree, which was mostly signals, systems and communication engineering. Then I started my PhD at Purdue and worked on building robots that were slightly smarter, which involved trying to learn more about how humans did things. I started off with robots that played games with each other, and we would try to take research about how children learn to do things and incorporate it into the robots. I then moved on to language and vision, which involved interaction between robots and humans.
When I finished my PhD, I got involved a bit in psychology at Stanford, looking at more human-inspired models for how perception and cognition ought to work. Then I went to Oxford, where we were building more complex generative models, looking at ways to keep hold of this general human perspective on how to build models while also scaling them up. We were among the first to really get into multimodal generative models, around 2016, before the big Gen AI push for multimodality came about. I then ended up at Edinburgh doing explainable AI, as it involves a lot of interaction with humans: understanding what humans do and how they do it, and building that into robots so they can be better at interacting with humans.
What does your research on robots and human behaviours tell you about current AI tools?
The current AI systems are spectacular, but they also don't do the sort of reasoning that humans do. And that becomes apparent when you go into situations where you need particular types of interactive behaviour.
You can have a conversation with ChatGPT about writing your history essay; that's fairly straightforward. But if you wanted to use ChatGPT to help somebody who was, say, visually impaired and wanted help moving around a particular room or navigating the tube, it is not going to work well, because it's not set up to behave like that. It goes round in circles. That is the sort of reasoning that we still don't have with current AI systems; if you're in the business of building models that try to think and behave like humans, you've still got quite a big challenge in front of you.
It really is the case that we’re still looking to find ways of establishing common ground between humans and machines.
Tell us something about yourself that is surprising or unusual?
I basically have no social media presence. I've got no social media accounts or anything. So, I guess that would be surprising! :)