So, we have had a little time to play with the code, but we discovered Formula Pi at the eleventh hour so haven't really had much chance to get up to speed. The code as it comes is doing much of the heavy lifting for us, handling the decision making for dealing with vehicles on the track ahead, overtaking and collisions. So on this level it's not really much of a competition, is it? Just pick your lane and go.
It's obvious that to win this we need to do more than tinker round the edges; we need to take it to the next level. So, how far do our options go for tapping into information about obstacles ahead? Can we determine that there is a car to the left or a wall to the right? I have noted that the code seems to make its own decisions based on the number of unrecognised points in the field of vision. I'm just starting to muse over how we could use this, if at all.
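To make the "unrecognised points" idea concrete, here is a minimal sketch of that kind of heuristic. Everything here is an assumption for illustration: the lane colours, tolerance, and threshold are made-up values, and `count_unrecognised` / `obstacle_ahead` are hypothetical names, not functions from the actual Formula Pi code.

```python
import numpy as np

# Illustrative lane colours (RGB) -- the real race code uses its own
# calibrated thresholds; these values are placeholders.
LANE_COLOURS = np.array([
    [255, 0, 0],   # red lane
    [0, 255, 0],   # green lane
    [0, 0, 255],   # blue lane
], dtype=np.int16)

def count_unrecognised(frame, tolerance=60):
    """Count pixels not close to any known lane colour.

    frame: H x W x 3 uint8 array. A pixel counts as 'recognised' if its
    Chebyshev distance to some lane colour is within `tolerance`.
    """
    pixels = frame.reshape(-1, 3).astype(np.int16)
    # distance from every pixel to every lane colour
    diff = np.abs(pixels[:, None, :] - LANE_COLOURS[None, :, :]).max(axis=2)
    recognised = (diff <= tolerance).any(axis=1)
    return int((~recognised).sum())

def obstacle_ahead(frame, threshold=0.25):
    """Flag an obstacle when unrecognised pixels exceed a fraction of the frame."""
    total = frame.shape[0] * frame.shape[1]
    return count_unrecognised(frame) / total > threshold
```

On a clean view of the track almost every pixel matches a lane colour, so the count stays low; a robot (or a wall) in shot pushes the unrecognised fraction up, which is the signal you could feed into the avoidance logic.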
There is a lot of potential for code improvement; the avoidance logic is a particularly good place to start.
Potentially you have the whole camera image to look at, which you could use to try to detect the side walls or another robot. If you could make this reliable you would be well on your way to a winning entry :)
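As a starting point for the wall idea, here is one very simple sketch: look for large bright regions in the left and right thirds of the frame, on the assumption that the walls show up as near-white areas. The function name, brightness cut-off and fraction are all hypothetical, not anything from the real code, and in practice motion blur and lighting would make this far less clean.

```python
import numpy as np

def wall_side(frame, brightness=200, fraction=0.3):
    """Guess which side a bright wall is on, if any.

    frame: H x W x 3 uint8 array. Checks the left and right thirds of
    the image for the fraction of near-white pixels (all channels at or
    above `brightness`). Returns 'left', 'right', or None.
    """
    h, w, _ = frame.shape
    third = w // 3

    def bright_fraction(region):
        # a pixel is "near-white" only if its darkest channel is bright
        return float((region.min(axis=2) >= brightness).mean())

    left = bright_fraction(frame[:, :third])
    right = bright_fraction(frame[:, -third:])
    if left > fraction and left >= right:
        return 'left'
    if right > fraction:
        return 'right'
    return None
```

A real entry would want something more robust (edge detection, or tracking how the bright region moves frame to frame), but even a crude cue like this could bias the lane choice away from the nearer wall.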
I will avoid saying too much, but it would be worth seeing the set of raw images we posted: MonsterBorg self-driving footage, to get an idea of how things look, in particular the effect of motion blur on the image detail. Our simplistic detection is not very reliable (see the footage from the last series here) and could definitely be improved or replaced to get better results.
Of course you can't give the game away. Reassuring to know I'm not barking up the wrong tree, mind you.