
Philip English

Robotics Enthusiast, Director, Investor, Trainer, Author and Vlogger

SLAMCORE interview with Owen Nicholson

Hi guys, Philip English from philipenglish.com. Welcome to the Robot Optimized Podcast, where we talk about everything robotics related. For our next episode we have SLAMCORE, led by CEO Owen Nicholson, who talks about their leading software for robots.

https://youtu.be/gl0hiegQfcM

Philip English (00:14):

Hi guys, Philip English. I am a robotics enthusiast, reporting on the latest business applications of robotics and automation. Today we've got SLAMCORE, and we've got Owen, who's going to give us a quick overview of the technology. He's also the CEO and co-founder. For any of you who haven't come across SLAM before, it stands for simultaneous localization and mapping, and SLAMCORE develops the algorithms that allow robots and machines to understand the space around them. So as more robots come out, they have a sense of where they are and can interact with their environment. So, Owen, could you give us an intro and an overview about yourself to start with, if that's okay?

Owen Nicholson (01:20):

Sure. Awesome. Well, thanks a lot for the opportunity, Phil, and thanks for the intro. So just to play back: I'm Owen, I'm the CEO at SLAMCORE, and I'm also one of the original founders. We've been going for about five years now. We originally spun out from Imperial College in the UK, one of the top colleges in the world, founded by some of the absolute world leaders in the space. It's been an incredible journey over the last five years, taking this technology and turning it into a real commercial product, which I'd love to tell you all about today.

Philip English (01:55):

Alright, thank you for that overview. So you're saying you're one of the co-founders. I saw on the website, is it four main co-founders, or...?

Owen Nicholson (02:04):

So yes, two academic co-founders and then two full-time business founders as well. From the academic side, we have Prof. Andrew Davison and Dr. Stefan Leutenegger; between them they're two of the most respected academics in this space. Most notably Prof. Davison, who is one of the original founders of the concept and one of the real pioneers of SLAM, particularly using cameras. We'll talk more about that, but our particular flavor of SLAM uses vision, and he's been pushing that for nearly 20 years now, so it's incredible to have him as part of the founding team. And then Dr. Leutenegger, who's now at the Technical University of Munich, is another one of the real pioneers of vision for robotics. And then myself and my co-founder were on the full-time business side of things when we founded the company.

Philip English (03:01):

Right. Fantastic. So you've got quite an international group of co-founders there, and it sounds like, particularly on the academic side, you've got a team that have been studying and working on this technology for years. That's interesting.

Owen Nicholson (03:19):

Absolutely. And it's one of those things: when you start with strong technical founders, you can attract other great people into the space. One of our first hires was our CTO, Dr. Pablo Alcantarilla, who came from iRobot. It was great to be able to bring in someone of that quality. He's another one of the absolute real leaders in the space, but also with real experience in industry. He's been at Toshiba, he's been at iRobot, and he knows all about how you get this stuff to work on low-cost hardware in the real world, at a price-to-performance point that really makes sense. We've now reached our 33rd hire, so it's been incredible, and we've still got a big technical team, about 18 PhDs. I think about three quarters are still technical, so they either have a PhD or extremely in-depth experience in software engineering, particularly embedded software engineering. But we've also been growing out the commercial and business side of the company over the last year and a half.

Philip English (04:25):

Fantastic. Yeah, it sounds like phenomenal growth over the five years to have such a strong team there. And as we were chatting about before, you're based down in Borough in central London, but you've also got another branch a bit further out. Where was that again?

Owen Nicholson (04:43):

Chiswick, sort of west London. So we have a couple of offices for the team to work from. We have, on the last count, 17 nationalities now represented within the company. We sponsor a lot of international visas and bring a lot of people into the UK to work for SLAMCORE, from all over the world; most continents are represented now. But I think this is just the way it goes: the type of tech we're working on is very specialist and we need the best of the best. The challenge is that these guys and girls could go and work at DeepMind or Oculus if they wanted to, so we need to make sure that we attract them and retain them, which is something we've been very successful with so far.

Philip English (05:28):

That's right. That is what it's all about, getting the best team around you, like a sports team: you want the best players to do the work. How did you find your team to start with? Do you regularly advertise, or do you do a bit of headhunting?

Owen Nicholson (05:47):

A mixture of both. Having great people on the founding team means we've got good access to the network, so we do have a lot of inbound queries coming in, and we have a very rigorous interview process. But we do use recruiters and headhunters; particularly for some of the more commercial hires we've used high-end headhunters to find people, because they're very hard to find. Once you do bring them in, we need to make sure that this is a vision they really buy into. So we have multiple ways in which we've attracted people over the years, but I'd say because we have such a good quality team, it attracts other great people. That's one of the real benefits.

Philip English (06:35):

That's right. So talent attracts talent, and I'd imagine those are the sort of people who tend to know each other within the space as well. Now, you mentioned vision there, and we'll get into vision a bit later on, but I'm interested in the problem or the issue to start with. Could you talk about the fundamental problem that you guys are trying to address?

Owen Nicholson (07:02):

Sure, sure. I think at the heart of it, we exist to help developers give their robots and machines the ability to understand space. That's quite high level, but let's start there. Ultimately we break this down into the ability for a machine to know its position, know where the objects are around it and know what those objects are. So it's its coordinates, its map, and the question of "is it a person? Is it a door?" Those are the three key questions that machines, and particularly robots, need to answer to be able to do the job they've been designed for. And the way this is normally done is using the sensors onboard the robot and combining all these different feeds into a single source of truth, where the robot creates essentially a digital representation of the world and works out where it is within that space.
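
To make those three questions concrete, the sketch below shows the kind of data a spatial understanding system might report for each sensor frame: a pose, a map, and labelled objects. It is a minimal illustration only; the names and types are assumptions, not SLAMCORE's actual API.

```python
# Hypothetical sketch of the three outputs Owen describes: position, map, semantics.
# Names and structure are illustrative assumptions, not SLAMCORE's real API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Pose:
    """Where am I? Position (metres) and heading (radians) in the map frame."""
    x: float
    y: float
    z: float
    yaw: float


@dataclass
class DetectedObject:
    """What is around me? A labelled object with a position estimate."""
    label: str          # e.g. "person", "door", "pallet"
    pose: Pose
    is_dynamic: bool    # moving things should not anchor localization


@dataclass
class SpatialUnderstanding:
    """Single source of truth fused from the robot's onboard sensors."""
    pose: Pose                                    # level 1: tracking
    occupancy_grid: List[List[float]]             # level 2: mapping (free/occupied probabilities)
    objects: List[DetectedObject] = field(default_factory=list)  # level 3: semantics
```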

Owen Nicholson (08:01):

And the problem has always really been that the number one cause of failure for a robot isn't the wheels falling off or it falling over, although there are some funny videos on the internet about that. When it comes to real hardcore robotics development, the challenges faced are mainly around the discrepancies between the robot's understanding of the space and reality. That's what causes it to crash into another object, and what causes it to get lost on the way back to the charging station and not actually get there in time, so it runs out of juice and just dies. So the high-level problem we are trying to address is giving developers the ability to answer these questions without having to be deep, deep experts in the fundamental algorithms that allow you to do that.

Owen Nicholson (08:53):

Because there's an explosion of robotics companies at the moment, and it's super exciting seeing all these different new applications coming out, really driven by lower-cost hardware and modular software, which allow you to quickly build POCs, demos and early-stage prototypes. But when you really drill into it, probably well over 90% of those machines today would not scale to a commercially viable product as they stand, if they literally went and tried to sell that hardware and that software right now. So this is where most of the energy is focused by these companies: trying to modify what they have to be more accurate, more reliable, or to reduce the cost, and most of the time all three. They're nearly always trying to increase the performance and reduce the cost.

Owen Nicholson (09:43):

And this is phenomenally time-consuming and very expensive, because it's lots of trial and error, especially if you have, say, a service robot where you need to shut down a supermarket just to do your testing. You might only get an hour a month to do that with your client, and that's such critical time. If you spend the entire time just trying to get the thing from A to B, you're not actually working on what it does when it gets there. This is really what's holding back the industry as a whole.

Philip English (10:13):

Right, I see. So if I'm a manufacturer and I want to build a solution for, say, retail or education or healthcare, which is quite a good one, then SLAMCORE is basically one of the components that I can bring into the product I'm building. It's got all the expertise, everything it needs to do a brilliant job on the vision side. That helps with costs when manufacturing a new product, and then it's easier for the customer to launch the product, knowing that it's got a proven and safe way of localizing.

Owen Nicholson (10:54):

Absolutely, so it shortens time to market for a commercially viable system. You can build something within a month; in fact, at the end of a master's project you'll quite often have a robot which is able to navigate and get from A to B. But doing it in a way that holds up when the world starts to get a bit more chaotic is the real challenge. When you have people moving around and structures changing, the standard systems today just don't work in those environments. They don't work well enough, especially when you have a hundred, a thousand, 10,000 robots. If your mean time between failures is once every two weeks, that's okay for your demo, but it doesn't work when you've got 10,000 robots deployed across wide areas.

Philip English (11:40):

Yeah, this is it. From what I've seen, it's all about movement. As you said, you can do a demo with a robot and show it working in an environment that's half empty where no one's really around, but once it's a busy environment, busy retail, lots of people, lots of movement, it's very easy for the robot to get confused: is that a person? Is that a wall? Where am I? And then that's it, it loses its localization. So I suppose the question I had was around the technology. I saw on one of your videos that you were using one of the Intel cameras, but can you link it with any laser scanner, any LIDAR scanner? Is there a certain product range that you need to integrate for SLAMCORE to work best?

Owen Nicholson (12:30):

Great question. I think this is one of the really interesting "when does the technology become a commercial product?" questions. The answer is, if you lock down the hardware and you work on just one specific hardware and sensor combination, then you can build a system which works well, particularly with vision. If you look at some of the products out there already, the Oculus Quest, I know it's not a robot, but ultimately it's answering very similar questions: where is the headset, and what are the objects around it? The same goes for the HoloLens, the iRobot Roomba and a number of others; they've all successfully integrated vision into their stacks, and they work very well on low-cost hardware. The challenge has been if you don't have those kinds of resources, if you're not Facebook or Microsoft or iRobot.

Owen Nicholson (13:32):

Then a lot of companies are using much more open-source solutions, and they quite often use laser-based localization. That's the very common approach in this industry, and we are not anti-laser at all. LIDAR is an incredible technology, but you shouldn't need a $5,000 LIDAR on your fleet of robots just for localization, and that's currently where we are in this industry. The reality is there are cheaper ones, absolutely, but to get ones that actually work in more dynamic environments, you need to be spending a few thousand dollars on your lasers. At the heart of our system, we process the images from a camera and extract the spatial information. So we look at the pixels and how they flow to get a sense of the geometry within the space.

Owen Nicholson (14:21):

This gives you your coordinates. It gives you the surface shape of the world, so your floor plan and where the obstacles are, irrespective of what they are: is there something in my way? That's the first level our algorithms operate at, but then we're also able to take that information and use our proprietary machine learning algorithms to draw out the higher-level spatial intelligence, which is the object names, segmenting them out, and looking at how they're moving relative to other parts of the environment. That all means we're able to provide much richer spatial information than you can achieve with even the high-end 3D LIDARs available today. To address your question directly, as far as portability between hardware goes, this is one of the real challenges, because imagine if we'd decided three years ago to just lock it down to one sensor.

Owen Nicholson (15:22):

Take the Intel RealSense: it's a great sensor, they've done a really good job. If we'd just decided to work with that and optimize only for that, today we would have something which is extremely high-performing, but you wouldn't be able to move it from one product to another. If another sensor was out there at a different price point, it wouldn't port. So we've spent a lot of our energy taking our core algorithms and building tools and APIs around them, so that a developer can integrate them into a wide range of different hardware options, using the same fundamental core algorithms but interacting with them through different sensor combinations. Because the one thing we know in this entire industry, and there are a lot of unknowns, but probably the one thing we all know, is that there's no one robot which will be the robot that works everywhere, just like in nature.

Owen Nicholson (16:10):

There's no one animal. Although, as an aside, nature uses vision as well, so there are clearly some benefits, given that evolution has chosen vision as its main sensing modality. But we need variety, we need flexibility, and it needs to be easy to move from one hardware configuration to the next. That's exactly what we're building at SLAMCORE. Our approach at the moment is to optimize for certain hardware, so the RealSense right now is our sensor of choice, and it works out of the box: you can be up and running within 30 seconds with a RealSense sensor. But if you come along with a different hardware combination, we can still work with you. It might just need a bit of supporting work, but we're not talking blue-sky research; we're talking a few weeks of drivers and API design to get that to work.
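
The portability point can be pictured as a thin driver layer between the sensor hardware and the core algorithms, so that swapping a RealSense for another stereo-inertial unit means writing a new driver rather than new algorithms. The sketch below is a rough illustration under that assumption; the class and method names are invented for the example, not a real SDK interface.

```python
# Illustrative hardware-abstraction sketch: the core SLAM algorithms consume a
# generic frame, and each sensor family gets its own small driver. All names
# here are assumptions for illustration, not a real SDK interface.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SensorFrame:
    timestamp_ns: int
    left_image: bytes        # rectified grayscale image buffer
    right_image: bytes
    accel: tuple             # (ax, ay, az) from the IMU
    gyro: tuple              # (gx, gy, gz)


class CameraDriver(ABC):
    """Thin shim that each supported sensor implements."""

    @abstractmethod
    def read_frame(self) -> SensorFrame:
        ...


class RealSenseDriver(CameraDriver):
    def read_frame(self) -> SensorFrame:
        # Would call the vendor SDK here; stubbed out for the sketch.
        raise NotImplementedError("hook up the vendor SDK here")


def run_slam(driver: CameraDriver, slam_core) -> None:
    """The core algorithms stay identical; only the driver changes per sensor."""
    while True:
        frame = driver.read_frame()
        slam_core.process(frame)   # tracking -> mapping -> semantics
```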

Philip English (16:59):

Right, fantastic. And I suppose every year you also get a new version of the camera coming out, a new version of the Intel RealSense, which is normally a more advanced, better version, and I suppose that helps. That brings me to your three key levels, which is what I wanted to go over, because you touched on them there. You had three levels: tracking, mapping and semantics. So is your algorithm stage level three, the semantics, or is it level two that you use?

Owen Nicholson (17:38):

We call it full-stack spatial understanding. We provide the answers to all three, but within a single solution, and this has huge advantages. There are performance advantages, but also you're not processing the data in lots of different ways, which means you can answer these questions using much lower-cost silicon and processors, because each level feeds into the next. For example, our level one solution, tracking, gives you very good positioning information, and level two is the shape of the world, but we can feed the position into the map so that you get a better-quality map. Then we can use the semantics on that map to identify dynamic objects and remove them before they're even mapped, so that you don't confuse the system. And this actually improves the positioning as well, because you're no longer measuring your position against things which are non-static. So there's a real virtuous circle in taking a full-stack approach, and it's only really possible if you understand the absolute fundamental mathematics going on, so that you can optimize across the stack and not just within the individual elements.
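
The "virtuous circle" Owen describes can be sketched as a per-frame loop in which tracking feeds mapping, semantics strips out dynamic objects before they enter the map, and the cleaner map in turn improves the next frame's tracking. The code below is an illustration with assumed names and objects, not the real pipeline.

```python
# Illustration of the full-stack feedback loop (assumed names, not the real system):
# level 1 tracking feeds level 2 mapping, level 3 semantics removes dynamic
# objects before they pollute the map, and the cleaner map improves tracking.

def process_frame(frame, map_state, tracker, segmenter):
    # Level 1: estimate pose against the current (dynamic-free) map.
    pose = tracker.estimate_pose(frame, map_state)

    # Level 3: label the measurements so moving things can be excluded.
    labels = segmenter.classify(frame)                    # e.g. person, forklift, wall
    static_points = [p for p, lbl in zip(frame.points, labels) if not lbl.is_dynamic]

    # Level 2: only static structure is fused into the map, keyed by the pose.
    map_state.integrate(pose, static_points)

    # The updated, cleaner map is what the next frame's tracking localizes against.
    return pose, map_state
```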

Philip English (19:01):

Wow, okay. And then within the algorithm package, is it a constantly learning system? So say we've developed a mobile trolley to go around a factory and someone puts a permanent post or a permanent obstacle there, will it learn to say, okay, that obstacle is there now, and include that in the map?

Owen Nicholson (19:29):

Absolutely, that's one of our core features, which is what we call lifetime mapping. Currently with most systems, you would build your map. This is how a lot of LIDAR localization works: you'd build a map with essentially a master run, save that map, maybe pre-process it to get it as accurate as possible, and that becomes your offline reference map which everything localizes against. Right now we provide that functionality today using vision instead of LIDAR. You already get a huge amount more tolerance to variation within the scene, because we are tracking the ceiling, the floor and the walls, which are normally a lot less likely to change. So even if that post appeared, it wouldn't actually change the behavior of the entire system.

Owen Nicholson (20:20):

But later this year we'll also be updating our release to be able to merge maps from different agents and from different runs into a new map. So every time you run your system, you can update it with the new information. This is something which is very well suited to a vision-based approach, because we can actually identify: okay, that was a post. Or, probably more interestingly, maybe a pallet gets left in the middle of the warehouse. During that day you want to communicate to the fleet that there's a pallet here, so you don't plan your path through it; but the next day you might want to remove that information entirely, because it's unlikely to still be there. Ultimately, we don't provide the final maps and the final systems; we provide the information that developers can then use with their own strategies. This is key: some applications might want to keep all the dynamic objects in their maps, and some might want to ignore them entirely. So we really just provide the locations, the positions of those objects, in a very clean API so that people can use it themselves.
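
As a concrete picture of lifetime mapping, the sketch below merges object observations from successive runs and lets the developer's own policy decide how long a transient object such as a pallet stays in the reference map. The names and the time-to-live policy are assumptions for illustration; the SDK's real map-merging interface may look quite different.

```python
# Illustrative lifetime-mapping sketch (assumed names, not SLAMCORE's API):
# merge object observations from successive runs and age out transient ones.
import time
from dataclasses import dataclass


@dataclass
class ObservedObject:
    label: str            # e.g. "pallet", "post"
    position: tuple       # (x, y, z) in the shared map frame
    last_seen: float      # unix timestamp of the run that observed it
    permanent: bool       # walls/posts vs. things likely to move on


def merge_runs(base_map, new_observations, ttl_seconds=24 * 3600):
    """Fold a new run into the reference map using the developer's own policy:
    keep permanent structure forever, drop transient objects after ttl_seconds."""
    now = time.time()
    merged = [obj for obj in base_map
              if obj.permanent or now - obj.last_seen < ttl_seconds]
    merged.extend(new_observations)
    return merged
```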

Philip English (21:40):

Right, I see. And then, on merging in other mapping tools: I've seen the classic setup where someone has a laser scanner on a pole and walks around the factory or the hospital to create a 3D map. Can you take that data and merge it in with your data to get a more accurate map, or maybe not?

Owen Nicholson (22:07):

At the moment, we don't fuse maps created by other kinds of systems into our system. We would ultimately want to consume the raw data from that laser and fuse it into our algorithms. Right now, our support is for vision, inertial sensors and wheel odometry. LIDAR support will come later in the year; it's just a matter of engineering resource at the moment. Algorithmically it's all supported, but from an engineering and API point of view, that's where a lot of the work is. That last 10% a lot of people will tell you about is quite often 90% of the work; people say it's 80/20, but I think in robotics it's more like 90/10. So we don't support that sort of setup at the moment, but the answer is you shouldn't need to do that, because with those systems you need to be very accurate and quite often careful about how you move the LIDAR, you need a lot of compute, and you quite often need to do offline post-processing. Whereas our system is all real time on the edge. It runs using vision, and you can build a 3D model of the space in real time, watching it being created in front of you on the screen, so that you can go back and think, oh, I missed that bit, I'll scan there. That's really a core part of our offering.

Philip English (23:32):

Right, fantastic. I suppose the last question I have about the solution, having seen what you've been going through, which is fantastic, is: is this just for internal use, or can you go external as well? I've seen SLAM-based systems have issues with things like sunlight and rain and weather conditions. Is it internal at the moment, looking to go external eventually, or whereabouts does it sit?

Owen Nicholson (24:01):

We tend not to differentiate internal and external; it's more to do with the type of environment. As long as we have light (we won't operate in a lights-out factory, because we need vision) and as long as the cameras are not completely blinded, we can work. The rough rule of thumb we normally give our customers is: could you walk around that space and not crash into things? If the answer is yes, then we can work there. We may have to do some tuning for some of the edge cases around auto-exposure and some of the ways in which we fuse the data together, but we already have deployments in warehouses which have large outdoor areas and indoor areas, so they're transitioning between the two. We are not designing a system for the road or for city-scale autonomous-car SLAM, which really takes a different approach, and that's where a lot of the more traditional problems you just talked about, rain and those types of conditions, really start to become an issue. But we support indoor and outdoor; whether it's a lawn mower or a vacuum cleaner, the system will still work.

Philip English (25:12):

Right, fantastic. And I think this is it. We're starting to see a lot more outdoor robots coming to market, probably more over in the US, but that's going to be the future, and the whole market's there. I suppose that leads onto the bigger vision for you guys. Where do you see the company in five years' time, technology-wise? Is the ultimate goal to get perfect vision, similar to humans? I quite liked your animal analogy there, because obviously vision is one of the cool things. What's the why for you guys, and what are the next steps?

Owen Nicholson (25:58):

Yeah. I think really we founded the company because the core technology being developed has so much potential to have a positive impact on the world. It's essentially the ability for robots to see, and that can be used for so many different applications. The challenge has always been doing that flexibly whilst keeping the performance and cost at a point that makes sense. We're now demonstrating that through our SDK. The SDK is publicly available if you request access and you're able to download it; we already have over a hundred companies running it, and we have about a thousand companies waiting as we start to onboard them. So we've demonstrated that it's possible to deliver this high-quality solution in a flexible and configurable way.

Owen Nicholson (26:50):

This means we are essentially opening up this market to people who in the past may not have been able to get their products to that commercial level of performance to be successful. Having a really competitive, and also collaborative, ecosystem of companies working together, trying to identify new ways to use robots, has got to be good for us as an industry, because if it's just owned by a couple of tech giants, or even states, that's going to kill all of the competition. And this will drive some of the really big applications for robotics we see in the future. In five years' time, I believe there will be robots maintaining huge renewable energy infrastructure at a scale which would be impossible to manage with people driving machines around, and enabling sustainable agriculture in a way which means we can really target water and pesticides, so that we can feed the world as we continue to grow.

Owen Nicholson (27:55):

And you've seen all of the great work going on on Mars, with the Perseverance rover up there now using visual SLAM. It's not ours, unfortunately, but in the future we would like our systems to be running on every robot on the planet, and beyond; that's really where we want to take this. We have to make sure that these core components are available to as many people as possible, so that they can innovate and come up with those next-generation robotic systems which will change the world. We want to be a key part of that, but really sitting in the background, living vicariously through our customers. I quite often say I want SLAMCORE to be the biggest tech company that no one's ever heard of: having our algorithms running on every single machine with vision, but never having our logo on the side of the product.

Philip English (28:56):

Yeah, well, this is it, and this is the thing that excites me about robotics and automation. If you think about the IT industry, you have a laptop, a screen and a computer; there are lots of big players, but you're pretty much getting the same thing. With robotics, you're going to have all sorts of different technologies, different mechanical, physical machines, and it's going to be a complete mixture. Some companies will build similar things to do one job, while you may have different robots doing different jobs. So yeah, I think that sounds great. If you can solve that vision issue, it makes it a lot easier for start-ups and for businesses to take on the technology and get the pricing down, because you don't want robots costing hundreds of thousands of pounds; you want them at a level where they're well priced, so they can do a good job and, in the end, help us out with whatever role the robots do.

Philip English (29:54):

So yeah, that sounds really exciting, and I'm looking forward to keeping an eye on you guys. What's the best way to stay in contact with you, then, and the best way to get involved?

Owen Nicholson (30:06):

Genuinely, just head to the website and click on the request access button if you're interested in trying out the SDK. We're currently in beta rollout, focusing on companies with products in development. So if you are building a robot and looking to integrate vision into your autonomy stack, then request access and we can onboard you within minutes. It's just a quick download, and as long as you have hardware we support today, you can run the system. We have a mailing list as well, where we keep people up to date as exciting announcements come, so probably the best way is just to sign up to either our mailing list or our waiting list.

Philip English (30:57):

Right, perfect. Thank you, Owen. What I'll do, guys, is put a link to all the websites and some more information about SLAMCORE below. It was great interviewing you, many thanks for your time, and I'm looking forward to keeping an eye on you guys and seeing your progression. Thank you very much.

Owen Nicholson (31:15):

Absolutely.


Slamcore: https://www.slamcore.com/

Philip English: https://philipenglish.com/slamcore/

Robot Score Card: https://robot.scoreapp.com/

Sponsor: Robot Center: http://www.robotcenter.co.uk

Robot Strategy Call: https://www.robotcenter.co.uk/pages/robot-call
