Hi Everyone,
I was wondering if anyone has thought about or tried rewriting the code in a compiled language like C#. I know lambda.p.racing rewrote the code into Java. I'm wondering if there might be a performance gain to be found if the code was in a compiled language like C# instead of an interpreted language like Python.
I think it might be possible to do this in C#. I was able to write a very basic program (obligatory hello world program!) and compile it on the Raspberry Pi using Mono (https://www.mono-project.com/) and get the .exe file to run in the Pi terminal.
This would be a huge challenge to actually do, and not one I'm really sure I could, but I'm curious about what others thought or if they had any experience in trying this.
Generally speaking a compiled program should work just as well as an interpreted one, so I do not see that as being a problem :)
From memory all of the bits needed to use OpenCV from C / C++ should already be installed. Other languages like C# will either need to use the C++ based library directly or have their own wrapper library added.
We are happy to help out where we can with any team trying to use a different language. If you need us to check if programs run or setup an SD card we should be able to find the time ;)
One more idea to think about. If you are intending to see if you can improve the speed of the code by having it compiled it might be worth looking at Cython as another possible option.
On a slightly different but related path, have you tried PyPy, Arron? I'm planning to give it a try in the future. My thought is that the only sticking point may be the ThunderBorg library.
Jamie, with something like PyPy the code is JIT compiled, so it compiles to native code as it executes, and per-thread performance is generally significantly higher than with the standard Python interpreter. It doesn't resolve some of the deeper limits in Python, like the GIL, which has implications for multithreading.
I have not tried PyPy, it would be interesting to see how well it would work.
The ThunderBorg library is essentially pure Python code using file I/O to communicate with the board. I do not suspect it would cause any issues, but it would need to be tested with an actual board to prove it is working correctly.
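For anyone curious what "pure Python using file I/O" looks like in practice, here is a generic sketch of the Linux i2c-dev pattern such libraries follow. The `I2C_SLAVE` ioctl number is the real Linux constant, but the board address and command byte layout below are made-up placeholders, not the actual ThunderBorg values.

```python
import fcntl
import io

I2C_SLAVE = 0x0703       # Linux i2c-dev ioctl to select the target device
BOARD_ADDRESS = 0x15     # placeholder 7-bit address, not necessarily the board's

def pack_command(command, value):
    """Build the raw byte pair sent to the board (hypothetical format)."""
    return bytes([command & 0xFF, value & 0xFF])

def send_command(command, value, bus='/dev/i2c-1'):
    """Write a command over the i2c-dev file interface, the way a pure
    Python motor library can talk to a board (needs real hardware to run)."""
    with io.open(bus, 'wb', buffering=0) as device:
        fcntl.ioctl(device, I2C_SLAVE, BOARD_ADDRESS)
        device.write(pack_command(command, value))
```

Because the whole path is ordinary file reads and writes, there is nothing CPython-specific in it, which is why an alternative interpreter like PyPy would be expected to cope.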
One thing to bear in mind with either Cython or PyPy is that the underlying implementation of OpenCV is already compiled from C++ code. This means that the difficult image processing tasks already run fairly fast.
That's a good point, Arron. This would explain why the tests I ran showed speed improvements on things like basic maths functions and opening and closing a log file, but when I tried to open, crop and save an image it actually ran slightly slower using Cython (only by fractions of a second).
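The effect Arron describes is easy to check with a quick benchmark: a pure-Python pixel loop is the kind of code Cython can accelerate a lot, while the equivalent NumPy (or OpenCV) call already runs inside compiled C code, so compiling the Python wrapper around it gains little. A minimal sketch, assuming NumPy is available:

```python
import timeit
import numpy as np

img = np.random.randint(0, 256, (120, 160), dtype=np.uint8)

def threshold_python(image):
    # Pure-Python loop: the kind of hot spot Cython speeds up a lot
    return [[255 if px > 128 else 0 for px in row] for row in image.tolist()]

def threshold_numpy(image):
    # Already runs in compiled code, like OpenCV's own routines
    return np.where(image > 128, 255, 0)

# Both give the same answer; only the second is already "compiled"
assert (np.array(threshold_python(img)) == threshold_numpy(img)).all()
print('python loop:', timeit.timeit(lambda: threshold_python(img), number=10))
print('numpy call :', timeit.timeit(lambda: threshold_numpy(img), number=10))
```

On most machines the loop version is orders of magnitude slower, which is exactly the gap Cython closes and NumPy/OpenCV have already closed.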
I suspect the Java rewrite was more to do with preferred language rather than speed advantage. Though with two season wins, Thomas must be doing something right!
I think I'll leave this idea for now and focus on other areas to make improvements. I think I've got the image detection bugs worked out in my current code, and in my last race I actually managed to complete a reasonable number of laps.
Arron, if you were to improve the base code, what would be your top 3 areas to focus on?
There are two things that immediately stand out as being good areas to focus on:
Picking the third option is harder, there are a few good contenders:
Generally speaking the standard code works well when things are fine, but it struggles with obstacles, crashes, and mistakes.
Do you have any suggestions on how to go about these? Is there something so obvious that we're all missing it? I hope it's OK to ask, and since everyone can see this you're not giving any big hints to just one team?
Here is what I've done so far or think could be done. I'll share some of my ideas and strategies in the hope that others might do the same.
1. Detecting other robots. I've had some pretty good success using OpenCV's Haar Cascade, though training it on what a MonsterBorg looks like took some time.
2. Overtaking. At the moment I'm just using the base overtake, but I would probably want something like another mode to switch into that allows for quicker lane changes.
3. Recovery code when stuck. That's tough since there are many ways to be stuck. You'd have to look at the common ways people get stuck and look at what the camera sees, or doesn't see.
4. Detecting you are driving the wrong way at the edges of the track. Perhaps by counting the number of left and right turns you make. Since there should be more right turns than left, you could tell you're going the wrong direction if the count of left turns suddenly became higher.
5. Improve the driving when flipped over. I guess you could fine tune the amount of cropping the image processing does depending on the orientation of the MonsterBorg. How far off vertical centre is the camera? Is there a big difference in height off the track between upside-down and right way up? I've had it said to me that I seem to run better upside-down :)
6. Try to drive across the S-curve in a straighter line. The only way that comes to mind would be to use the WaitForDistance function and, once a certain distance is reached, disable the line following for a few seconds and drive straight. This could have interesting results.
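Idea 4 above can be sketched with a rolling window of recent steering decisions: on a track where right turns should outnumber left turns, a window dominated by lefts suggests you have spun around. This is only an illustration; the window size and trigger threshold are guesses that would need tuning against real laps.

```python
from collections import deque

class TurnCounter:
    """Sketch of wrong-way detection by counting turn directions.
    Window size and threshold are assumptions, not tested values."""

    def __init__(self, window=200, threshold=0.65):
        self.history = deque(maxlen=window)   # recent steering signs
        self.threshold = threshold            # fraction of lefts that triggers

    def update(self, steering):
        """steering < 0 means a left correction, > 0 means right."""
        if steering != 0:
            self.history.append(-1 if steering < 0 else +1)

    def probably_wrong_way(self):
        if len(self.history) < self.history.maxlen:
            return False                      # not enough evidence yet
        lefts = sum(1 for s in self.history if s < 0)
        return lefts / len(self.history) > self.threshold
```

You would feed every steering output into `update()` and trigger a turn-around routine when `probably_wrong_way()` fires; a large window keeps a single left-hand corner from causing a false alarm.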
Does anyone else have any ideas or strategies they'd be willing to share?
Jamie, thanks for sharing some of your ideas and strategies and I also hope others will share as well.
Thanks Arron for the suggestions and Jamie for asking the question. Also thank you Jamie and James T. for their ideas. Here are my thoughts.
Disclaimer... PicoHB has never used the standard code (although there is a lot of its DNA in there, since that was my starting point) and it's taken two seasons to get close to being competitive. You might not have noticed, because it went horribly wrong soon after, but the last full lap that PicoHB did in the B-Final before breaking down was our fastest ever lap, at something like 16.99 seconds, and that felt like a win in itself, even though we came last. :S
1. Detecting other robots.
Walls are black, tyres are black. Our strategy is to try to avoid objects that are black and aim for track that is less black. This doesn't work when you get very close to other robots or bright lights.
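The "aim for less black" idea can be sketched as a simple strip vote: threshold the image into a black mask, split it into vertical strips, and steer toward the strip containing the least black. The strip count and the linear steering mapping below are assumptions for illustration, not PicoHB's actual code.

```python
import numpy as np

def steer_away_from_black(black_mask, strips=5):
    """black_mask: 2-D boolean array, True where a pixel looks black.
    Returns steering in [-1, +1], aiming at the clearest vertical strip."""
    height, width = black_mask.shape
    counts = [black_mask[:, i * width // strips:(i + 1) * width // strips].sum()
              for i in range(strips)]
    best = int(np.argmin(counts))          # strip with the least black
    centre = (best + 0.5) / strips         # 0..1 across the image
    return 2.0 * centre - 1.0              # map to -1 (left) .. +1 (right)
```

As the post notes, a scheme like this degrades when a robot fills most of the view (every strip is black) or when glare makes dark objects read as bright.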
2. Overtaking.
PicoHB will rarely overtake a robot, which is an issue for us, and here is why. We tend to slow down to improve the avoidance of obstacles, so the closer we get to the back of a leading robot the slower we will go. All we can do is hope that the robot ahead makes a mistake which isn't ideal.
3. Recovery code when stuck.
We have a similar approach to JT's, averaging the difference between a series of images over time (30 seconds does seem excessive?). But rather than going into reverse, we send random 'wriggle' signals to try to jolt the robot into a position where it can make a fresh decision about how to proceed.
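A minimal sketch of that frame-differencing-plus-wriggle idea, assuming greyscale frames as NumPy arrays: when the mean absolute difference between successive frames stays low for long enough, fire random motor bursts. The thresholds and the `set_motors(left, right)` drive function are placeholders, not PicoHB's real values or API.

```python
import random
import numpy as np

STUCK_DIFF = 2.0     # mean pixel change below this counts as "not moving" (a guess)
STUCK_FRAMES = 50    # how many still frames in a row before acting (a guess)

class StuckDetector:
    def __init__(self):
        self.last = None
        self.still = 0

    def update(self, frame):
        """frame: greyscale NumPy array. Returns True once the view has
        barely changed for STUCK_FRAMES consecutive frames."""
        if self.last is not None:
            diff = np.abs(frame.astype(np.int16) - self.last.astype(np.int16)).mean()
            self.still = self.still + 1 if diff < STUCK_DIFF else 0
        self.last = frame
        return self.still >= STUCK_FRAMES

def wriggle(set_motors, steps=5):
    """Send short random motor bursts; set_motors(left, right) stands in
    for whatever your drive function is."""
    for _ in range(steps):
        set_motors(random.uniform(-1, 1), random.uniform(-1, 1))
```

The counter resets the moment a frame changes noticeably, so the robot only wriggles when the camera has genuinely been seeing the same thing for a while.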
4. Driving the wrong way at the edges.
Until the tail end of last season PicoHB was using a method which included the track edges for wrong-way detection, e.g. if you see black-green to the left of green-blue then turn around. However, in the end I was getting too much false black detection on the track, triggering too many false spins, so recently I have gone back to only using the red-green line for spin detection and am trying to avoid following lines at the edges. I feel that the field of view of the robot is about three lanes wide about halfway up the image, so a small amount of oscillation in the steering should make sure we see the centre line at least every few seconds.
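The colour-ordering test behind that kind of spin detection can be sketched like this: classify the pixels of one scan row into colour labels, then check which side of the row each colour first appears on. Which ordering counts as "correct" depends on the real track layout, and the labelling step is assumed to exist; this is purely illustrative.

```python
import numpy as np

def facing_wrong_way(labels):
    """labels: 1-D array of colour labels for one scan row, e.g. 'r', 'g',
    'b', 'k' (black). If the first red region sits to the right of the
    first green region, the red-green boundary is mirrored, which this
    sketch reads as facing backwards. Returns None when it can't tell."""
    reds = np.flatnonzero(labels == 'r')
    greens = np.flatnonzero(labels == 'g')
    if reds.size == 0 or greens.size == 0:
        return None                        # boundary not visible in this row
    return bool(reds[0] > greens[0])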
5. Driving whilst flipped.
As per James T's answer, finding the best place to crop the image while flipped matters a lot. I feel it may be necessary to have an alternative set of PID values for running upside down, but I only use the natural PID of the track so I can't suggest any changes to values. In the standard code (from memory) there is a value which multiplies the distance between lines based on how high up the image the scan is; this value would definitely need to vary between flipped and non-flipped, because the 'vanishing point' of a flipped image is much lower down than in an upright image.
6. Cutting the chicane.
Shush, this is where PicoHB works best; I don't want to give away our secrets! But yes, as JT says, less reliance on line following seems to be the key.
7. Other suggestions.
Absolutely critical seems to be restarting quickly after a battery disconnect/reboot. I feel like the standard code tends to exit the program whenever an error is detected; ours runs on an infinite loop of retries. Wrapping (Python) code in try/except blocks helps to keep the robot at least doing something even if a minor exception is raised elsewhere. Also, doing more logging does help, especially where you have future improvements in mind, and capture as many images as you can for future testing. On image capturing, it seems sensible to move the image saving to a separate thread and not save images in-line with other processing. PicoHB will skip the saving of images if other processing is taking place, to prioritise better driving over image capturing. By using picamera rather than OpenCV's VideoCapture you can get much better control over the camera, but that is a whole other discussion in itself.
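Two of those patterns, the infinite retry loop and the skip-if-busy image saver, can be sketched as below. The function names are placeholders; `save_function` stands in for whatever actually writes the image to disk.

```python
import threading
import time

def run_forever(step, delay=1.0):
    """Keep the robot doing something: retry the main loop body on any
    error instead of exiting the program."""
    while True:
        try:
            step()
        except KeyboardInterrupt:
            raise                          # still allow a clean manual stop
        except Exception as err:
            print('Recovered from: %r' % err)
            time.sleep(delay)              # brief pause, then carry on

class FrameSaver:
    """Save images on a worker thread; drop frames while a save is already
    in progress so driving is never delayed by disk I/O."""

    def __init__(self, save_function):
        self.save_function = save_function  # e.g. a cv2.imwrite wrapper
        self.busy = threading.Lock()

    def offer(self, frame):
        if not self.busy.acquire(blocking=False):
            return False                    # saver busy: skip this frame
        threading.Thread(target=self._save, args=(frame,)).start()
        return True

    def _save(self, frame):
        try:
            self.save_function(frame)
        finally:
            self.busy.release()
```

The non-blocking `acquire` is what makes dropped frames cheap: the main loop never waits on the disk, it just moves on to the next camera frame.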
8. Hold on, wasn't there only supposed to be 3!?
;)
Jon